00:00:00.002 Started by upstream project "autotest-per-patch" build number 132386 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.142 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:05.416 The recommended git tool is: git 00:00:05.416 using credential 00000000-0000-0000-0000-000000000002 00:00:05.419 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:05.434 Fetching changes from the remote Git repository 00:00:05.437 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:05.451 Using shallow fetch with depth 1 00:00:05.451 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:05.451 > git --version # timeout=10 00:00:05.463 > git --version # 'git version 2.39.2' 00:00:05.463 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:05.479 Setting http proxy: proxy-dmz.intel.com:911 00:00:05.479 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:12.127 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:12.143 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:12.157 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:12.157 > git config core.sparsecheckout # timeout=10 00:00:12.171 > git read-tree -mu HEAD # timeout=10 00:00:12.189 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:12.215 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:12.215 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:12.344 [Pipeline] Start of Pipeline 00:00:12.359 [Pipeline] library 00:00:12.361 Loading library shm_lib@master 00:00:12.361 Library shm_lib@master is cached. Copying from home. 00:00:12.372 [Pipeline] node 00:00:27.374 Still waiting to schedule task 00:00:27.374 Waiting for next available executor on ‘vagrant-vm-host’ 00:10:43.695 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest 00:10:43.698 [Pipeline] { 00:10:43.706 [Pipeline] catchError 00:10:43.707 [Pipeline] { 00:10:43.716 [Pipeline] wrap 00:10:43.722 [Pipeline] { 00:10:43.727 [Pipeline] stage 00:10:43.728 [Pipeline] { (Prologue) 00:10:43.740 [Pipeline] echo 00:10:43.741 Node: VM-host-SM16 00:10:43.745 [Pipeline] cleanWs 00:10:43.752 [WS-CLEANUP] Deleting project workspace... 00:10:43.752 [WS-CLEANUP] Deferred wipeout is used... 
00:10:43.758 [WS-CLEANUP] done 00:10:43.933 [Pipeline] setCustomBuildProperty 00:10:44.024 [Pipeline] httpRequest 00:10:44.344 [Pipeline] echo 00:10:44.347 Sorcerer 10.211.164.20 is alive 00:10:44.358 [Pipeline] retry 00:10:44.360 [Pipeline] { 00:10:44.374 [Pipeline] httpRequest 00:10:44.380 HttpMethod: GET 00:10:44.381 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:10:44.382 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:10:44.382 Response Code: HTTP/1.1 200 OK 00:10:44.382 Success: Status code 200 is in the accepted range: 200,404 00:10:44.383 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:10:44.528 [Pipeline] } 00:10:44.548 [Pipeline] // retry 00:10:44.557 [Pipeline] sh 00:10:44.835 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:10:44.846 [Pipeline] httpRequest 00:10:45.180 [Pipeline] echo 00:10:45.182 Sorcerer 10.211.164.20 is alive 00:10:45.193 [Pipeline] retry 00:10:45.195 [Pipeline] { 00:10:45.210 [Pipeline] httpRequest 00:10:45.214 HttpMethod: GET 00:10:45.215 URL: http://10.211.164.20/packages/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz 00:10:45.216 Sending request to url: http://10.211.164.20/packages/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz 00:10:45.216 Response Code: HTTP/1.1 200 OK 00:10:45.217 Success: Status code 200 is in the accepted range: 200,404 00:10:45.218 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz 00:10:47.482 [Pipeline] } 00:10:47.501 [Pipeline] // retry 00:10:47.511 [Pipeline] sh 00:10:47.791 + tar --no-same-owner -xf spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz 00:10:51.088 [Pipeline] sh 00:10:51.370 + git -C spdk log --oneline -n5 00:10:51.370 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:10:51.370 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:10:51.370 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:10:51.370 f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy() 00:10:51.370 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion 00:10:51.390 [Pipeline] writeFile 00:10:51.431 [Pipeline] sh 00:10:51.738 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:10:51.751 [Pipeline] sh 00:10:52.030 + cat autorun-spdk.conf 00:10:52.030 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:52.030 SPDK_TEST_NVME=1 00:10:52.030 SPDK_TEST_FTL=1 00:10:52.030 SPDK_TEST_ISAL=1 00:10:52.030 SPDK_RUN_ASAN=1 00:10:52.030 SPDK_RUN_UBSAN=1 00:10:52.030 SPDK_TEST_XNVME=1 00:10:52.030 SPDK_TEST_NVME_FDP=1 00:10:52.030 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:52.037 RUN_NIGHTLY=0 00:10:52.039 [Pipeline] } 00:10:52.054 [Pipeline] // stage 00:10:52.070 [Pipeline] stage 00:10:52.073 [Pipeline] { (Run VM) 00:10:52.087 [Pipeline] sh 00:10:52.366 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:10:52.366 + echo 'Start stage prepare_nvme.sh' 00:10:52.366 Start stage prepare_nvme.sh 00:10:52.366 + [[ -n 4 ]] 00:10:52.366 + disk_prefix=ex4 00:10:52.366 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:10:52.366 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:10:52.366 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:10:52.366 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:52.366 ++ 
SPDK_TEST_NVME=1 00:10:52.366 ++ SPDK_TEST_FTL=1 00:10:52.366 ++ SPDK_TEST_ISAL=1 00:10:52.366 ++ SPDK_RUN_ASAN=1 00:10:52.366 ++ SPDK_RUN_UBSAN=1 00:10:52.366 ++ SPDK_TEST_XNVME=1 00:10:52.366 ++ SPDK_TEST_NVME_FDP=1 00:10:52.366 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:52.366 ++ RUN_NIGHTLY=0 00:10:52.366 + cd /var/jenkins/workspace/nvme-vg-autotest 00:10:52.366 + nvme_files=() 00:10:52.366 + declare -A nvme_files 00:10:52.366 + backend_dir=/var/lib/libvirt/images/backends 00:10:52.366 + nvme_files['nvme.img']=5G 00:10:52.366 + nvme_files['nvme-cmb.img']=5G 00:10:52.366 + nvme_files['nvme-multi0.img']=4G 00:10:52.366 + nvme_files['nvme-multi1.img']=4G 00:10:52.366 + nvme_files['nvme-multi2.img']=4G 00:10:52.366 + nvme_files['nvme-openstack.img']=8G 00:10:52.366 + nvme_files['nvme-zns.img']=5G 00:10:52.366 + (( SPDK_TEST_NVME_PMR == 1 )) 00:10:52.366 + (( SPDK_TEST_FTL == 1 )) 00:10:52.366 + nvme_files["nvme-ftl.img"]=6G 00:10:52.366 + (( SPDK_TEST_NVME_FDP == 1 )) 00:10:52.366 + nvme_files["nvme-fdp.img"]=1G 00:10:52.366 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:10:52.366 + for nvme in "${!nvme_files[@]}" 00:10:52.366 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:10:52.366 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:10:52.366 + for nvme in "${!nvme_files[@]}" 00:10:52.366 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:10:53.302 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:10:53.302 + for nvme in "${!nvme_files[@]}" 00:10:53.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:10:53.302 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:10:53.302 + for nvme in "${!nvme_files[@]}" 00:10:53.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:10:53.302 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:10:53.302 + for nvme in "${!nvme_files[@]}" 00:10:53.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:10:53.302 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:10:53.302 + for nvme in "${!nvme_files[@]}" 00:10:53.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:10:53.302 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:10:53.302 + for nvme in "${!nvme_files[@]}" 00:10:53.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:10:53.302 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:10:53.302 + for nvme in "${!nvme_files[@]}" 00:10:53.302 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:10:53.560 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:10:53.560 + for nvme in "${!nvme_files[@]}" 00:10:53.560 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:10:53.560 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:10:53.560 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:10:53.560 + echo 'End stage prepare_nvme.sh' 00:10:53.560 End stage prepare_nvme.sh 00:10:53.571 [Pipeline] sh 00:10:53.851 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:10:53.851 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:10:53.851 00:10:53.851 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:10:53.851 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:10:53.851 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:10:53.851 HELP=0 00:10:53.851 DRY_RUN=0 00:10:53.851 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:10:53.851 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:10:53.851 NVME_AUTO_CREATE=0 00:10:53.851 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:10:53.851 NVME_CMB=,,,, 00:10:53.851 NVME_PMR=,,,, 00:10:53.851 NVME_ZNS=,,,, 00:10:53.851 NVME_MS=true,,,, 00:10:53.851 NVME_FDP=,,,on, 00:10:53.851 SPDK_VAGRANT_DISTRO=fedora39 00:10:53.851 SPDK_VAGRANT_VMCPU=10 00:10:53.851 SPDK_VAGRANT_VMRAM=12288 00:10:53.851 SPDK_VAGRANT_PROVIDER=libvirt 00:10:53.851 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:10:53.851 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:10:53.851 SPDK_OPENSTACK_NETWORK=0 00:10:53.851 VAGRANT_PACKAGE_BOX=0 00:10:53.851 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:10:53.851 FORCE_DISTRO=true 00:10:53.851 VAGRANT_BOX_VERSION= 00:10:53.851 EXTRA_VAGRANTFILES= 00:10:53.851 NIC_MODEL=e1000 00:10:53.851 00:10:53.851 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:10:53.851 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:10:57.137 Bringing machine 'default' up with 'libvirt' provider... 00:10:57.396 ==> default: Creating image (snapshot of base box volume). 00:10:57.655 ==> default: Creating domain with the following settings... 
00:10:57.655 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732101902_ceba2fb2dfb992bfb029 00:10:57.655 ==> default: -- Domain type: kvm 00:10:57.655 ==> default: -- Cpus: 10 00:10:57.655 ==> default: -- Feature: acpi 00:10:57.655 ==> default: -- Feature: apic 00:10:57.656 ==> default: -- Feature: pae 00:10:57.656 ==> default: -- Memory: 12288M 00:10:57.656 ==> default: -- Memory Backing: hugepages: 00:10:57.656 ==> default: -- Management MAC: 00:10:57.656 ==> default: -- Loader: 00:10:57.656 ==> default: -- Nvram: 00:10:57.656 ==> default: -- Base box: spdk/fedora39 00:10:57.656 ==> default: -- Storage pool: default 00:10:57.656 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732101902_ceba2fb2dfb992bfb029.img (20G) 00:10:57.656 ==> default: -- Volume Cache: default 00:10:57.656 ==> default: -- Kernel: 00:10:57.656 ==> default: -- Initrd: 00:10:57.656 ==> default: -- Graphics Type: vnc 00:10:57.656 ==> default: -- Graphics Port: -1 00:10:57.656 ==> default: -- Graphics IP: 127.0.0.1 00:10:57.656 ==> default: -- Graphics Password: Not defined 00:10:57.656 ==> default: -- Video Type: cirrus 00:10:57.656 ==> default: -- Video VRAM: 9216 00:10:57.656 ==> default: -- Sound Type: 00:10:57.656 ==> default: -- Keymap: en-us 00:10:57.656 ==> default: -- TPM Path: 00:10:57.656 ==> default: -- INPUT: type=mouse, bus=ps2 00:10:57.656 ==> default: -- Command line args: 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:10:57.656 ==> default: -> value=-drive, 00:10:57.656 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:10:57.656 ==> default: -> value=-drive, 00:10:57.656 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:10:57.656 ==> default: -> value=-drive, 00:10:57.656 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:57.656 ==> default: -> value=-drive, 00:10:57.656 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:57.656 ==> default: -> value=-drive, 00:10:57.656 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:10:57.656 ==> default: -> value=-drive, 00:10:57.656 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:10:57.656 ==> default: -> value=-device, 00:10:57.656 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:57.915 ==> default: Creating shared folders metadata... 00:10:57.915 ==> default: Starting domain. 00:10:59.817 ==> default: Waiting for domain to get an IP address... 00:11:14.713 ==> default: Waiting for SSH to become available... 00:11:15.659 ==> default: Configuring and enabling network interfaces... 00:11:20.926 default: SSH address: 192.168.121.44:22 00:11:20.926 default: SSH username: vagrant 00:11:20.926 default: SSH auth method: private key 00:11:22.924 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:11:31.053 ==> default: Mounting SSHFS shared folder... 00:11:32.431 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:11:32.431 ==> default: Checking Mount.. 00:11:33.807 ==> default: Folder Successfully Mounted! 00:11:33.807 ==> default: Running provisioner: file... 00:11:34.383 default: ~/.gitconfig => .gitconfig 00:11:34.948 00:11:34.948 SUCCESS! 00:11:34.948 00:11:34.948 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:11:34.948 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:11:34.948 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:11:34.948 00:11:34.957 [Pipeline] } 00:11:34.974 [Pipeline] // stage 00:11:34.984 [Pipeline] dir 00:11:34.985 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:11:34.986 [Pipeline] { 00:11:35.001 [Pipeline] catchError 00:11:35.003 [Pipeline] { 00:11:35.015 [Pipeline] sh 00:11:35.293 + vagrant ssh-config --host vagrant 00:11:35.293 + sed -ne /^Host/,$p 00:11:35.293 + tee ssh_conf 00:11:39.485 Host vagrant 00:11:39.485 HostName 192.168.121.44 00:11:39.485 User vagrant 00:11:39.485 Port 22 00:11:39.485 UserKnownHostsFile /dev/null 00:11:39.485 StrictHostKeyChecking no 00:11:39.485 PasswordAuthentication no 00:11:39.485 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:11:39.485 IdentitiesOnly yes 00:11:39.485 LogLevel FATAL 00:11:39.485 ForwardAgent yes 00:11:39.485 ForwardX11 yes 00:11:39.485 00:11:39.499 [Pipeline] withEnv 00:11:39.502 [Pipeline] { 00:11:39.514 [Pipeline] sh 00:11:39.793 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:11:39.793 source /etc/os-release 00:11:39.793 [[ -e /image.version ]] && img=$(< /image.version) 00:11:39.793 # Minimal, systemd-like check. 
00:11:39.793 if [[ -e /.dockerenv ]]; then 00:11:39.793 # Clear garbage from the node's name: 00:11:39.793 # agt-er_autotest_547-896 -> autotest_547-896 00:11:39.793 # $HOSTNAME is the actual container id 00:11:39.793 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:11:39.793 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:11:39.793 # We can assume this is a mount from a host where container is running, 00:11:39.793 # so fetch its hostname to easily identify the target swarm worker. 00:11:39.793 container="$(< /etc/hostname) ($agent)" 00:11:39.793 else 00:11:39.793 # Fallback 00:11:39.793 container=$agent 00:11:39.793 fi 00:11:39.793 fi 00:11:39.793 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:11:39.793 00:11:40.065 [Pipeline] } 00:11:40.082 [Pipeline] // withEnv 00:11:40.092 [Pipeline] setCustomBuildProperty 00:11:40.107 [Pipeline] stage 00:11:40.109 [Pipeline] { (Tests) 00:11:40.128 [Pipeline] sh 00:11:40.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:11:40.680 [Pipeline] sh 00:11:40.963 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:11:41.237 [Pipeline] timeout 00:11:41.238 Timeout set to expire in 50 min 00:11:41.240 [Pipeline] { 00:11:41.255 [Pipeline] sh 00:11:41.536 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:11:42.103 HEAD is now at 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:11:42.114 [Pipeline] sh 00:11:42.392 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:11:42.664 [Pipeline] sh 00:11:42.969 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:11:43.063 [Pipeline] sh 00:11:43.345 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:11:43.605 ++ readlink -f spdk_repo 00:11:43.605 + DIR_ROOT=/home/vagrant/spdk_repo 00:11:43.605 + [[ -n /home/vagrant/spdk_repo ]] 00:11:43.605 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:11:43.605 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:11:43.605 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:11:43.605 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:11:43.605 + [[ -d /home/vagrant/spdk_repo/output ]] 00:11:43.605 + [[ nvme-vg-autotest == pkgdep-* ]] 00:11:43.605 + cd /home/vagrant/spdk_repo 00:11:43.605 + source /etc/os-release 00:11:43.605 ++ NAME='Fedora Linux' 00:11:43.605 ++ VERSION='39 (Cloud Edition)' 00:11:43.605 ++ ID=fedora 00:11:43.605 ++ VERSION_ID=39 00:11:43.605 ++ VERSION_CODENAME= 00:11:43.605 ++ PLATFORM_ID=platform:f39 00:11:43.605 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:11:43.605 ++ ANSI_COLOR='0;38;2;60;110;180' 00:11:43.605 ++ LOGO=fedora-logo-icon 00:11:43.605 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:11:43.605 ++ HOME_URL=https://fedoraproject.org/ 00:11:43.605 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:11:43.605 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:11:43.605 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:11:43.605 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:11:43.605 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:11:43.605 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:11:43.605 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:11:43.605 ++ SUPPORT_END=2024-11-12 00:11:43.605 ++ VARIANT='Cloud Edition' 00:11:43.605 ++ VARIANT_ID=cloud 00:11:43.605 + uname -a 00:11:43.605 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:11:43.605 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:43.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:44.121 Hugepages 00:11:44.121 node hugesize free / total 00:11:44.121 node0 1048576kB 0 / 0 00:11:44.121 node0 2048kB 0 / 0 00:11:44.121 00:11:44.121 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:44.380 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:44.380 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:44.380 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:44.380 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:11:44.380 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:11:44.380 + rm -f /tmp/spdk-ld-path 00:11:44.380 + source autorun-spdk.conf 00:11:44.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:44.380 ++ SPDK_TEST_NVME=1 00:11:44.380 ++ SPDK_TEST_FTL=1 00:11:44.380 ++ SPDK_TEST_ISAL=1 00:11:44.380 ++ SPDK_RUN_ASAN=1 00:11:44.380 ++ SPDK_RUN_UBSAN=1 00:11:44.380 ++ SPDK_TEST_XNVME=1 00:11:44.380 ++ SPDK_TEST_NVME_FDP=1 00:11:44.380 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:44.380 ++ RUN_NIGHTLY=0 00:11:44.380 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:11:44.380 + [[ -n '' ]] 00:11:44.380 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:11:44.380 + for M in /var/spdk/build-*-manifest.txt 00:11:44.380 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:11:44.380 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:44.380 + for M in /var/spdk/build-*-manifest.txt 00:11:44.380 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:11:44.380 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:44.380 + for M in /var/spdk/build-*-manifest.txt 00:11:44.380 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:11:44.380 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:44.380 ++ uname 00:11:44.380 + [[ Linux == \L\i\n\u\x ]] 00:11:44.380 + sudo dmesg -T 00:11:44.380 + sudo dmesg --clear 00:11:44.380 + dmesg_pid=5403 00:11:44.380 
+ sudo dmesg -Tw 00:11:44.380 + [[ Fedora Linux == FreeBSD ]] 00:11:44.380 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:44.380 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:44.380 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:11:44.380 + [[ -x /usr/src/fio-static/fio ]] 00:11:44.380 + export FIO_BIN=/usr/src/fio-static/fio 00:11:44.380 + FIO_BIN=/usr/src/fio-static/fio 00:11:44.380 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:11:44.380 + [[ ! -v VFIO_QEMU_BIN ]] 00:11:44.380 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:11:44.380 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:44.380 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:44.380 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:11:44.380 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:44.380 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:44.380 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:11:44.638 11:25:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:11:44.638 11:25:50 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:44.638 11:25:50 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:11:44.638 11:25:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:11:44.638 11:25:50 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:11:44.638 11:25:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:11:44.638 11:25:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.638 11:25:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:11:44.638 11:25:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:44.638 11:25:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.638 11:25:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.639 11:25:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.639 11:25:50 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.639 11:25:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.639 11:25:50 -- paths/export.sh@5 -- $ export PATH 00:11:44.639 11:25:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.639 11:25:50 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:11:44.639 11:25:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:11:44.639 11:25:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101950.XXXXXX 00:11:44.639 11:25:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101950.0blZcu 00:11:44.639 11:25:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:11:44.639 11:25:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:11:44.639 11:25:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:11:44.639 11:25:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:11:44.639 11:25:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:11:44.639 11:25:50 -- common/autobuild_common.sh@509 -- $ get_config_params 00:11:44.639 11:25:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:11:44.639 11:25:50 -- common/autotest_common.sh@10 -- $ set +x 00:11:44.639 11:25:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:11:44.639 11:25:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:11:44.639 11:25:50 -- pm/common@17 -- $ local monitor 00:11:44.639 11:25:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:44.639 11:25:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:44.639 11:25:50 -- pm/common@25 -- $ sleep 1 00:11:44.639 11:25:50 -- pm/common@21 -- $ date +%s 00:11:44.639 11:25:50 -- pm/common@21 -- $ date +%s 00:11:44.639 11:25:50 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101950 00:11:44.639 11:25:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101950 00:11:44.639 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101950_collect-cpu-load.pm.log 00:11:44.639 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101950_collect-vmstat.pm.log 00:11:45.571 11:25:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:11:45.571 11:25:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:11:45.571 11:25:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:11:45.571 11:25:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:11:45.571 11:25:51 -- spdk/autobuild.sh@16 -- $ date -u 00:11:45.571 Wed Nov 20 11:25:51 AM UTC 2024 00:11:45.571 11:25:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:11:45.571 v25.01-pre-217-g92fb22519 00:11:45.571 11:25:51 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:11:45.571 11:25:51 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:11:45.571 11:25:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:11:45.571 11:25:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:11:45.571 11:25:51 -- common/autotest_common.sh@10 -- $ set +x 00:11:45.571 ************************************ 00:11:45.571 START TEST asan 00:11:45.571 ************************************ 00:11:45.571 using asan 00:11:45.571 11:25:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:11:45.571 00:11:45.571 real 0m0.000s 00:11:45.571 user 0m0.000s 00:11:45.571 sys 0m0.000s 00:11:45.571 11:25:51 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:11:45.571 11:25:51 asan -- common/autotest_common.sh@10 -- $ set +x 00:11:45.571 ************************************ 00:11:45.571 END TEST asan 00:11:45.571 ************************************ 00:11:45.571 11:25:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:11:45.571 11:25:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:11:45.571 11:25:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:11:45.571 11:25:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:11:45.571 11:25:51 -- common/autotest_common.sh@10 -- $ set +x 00:11:45.571 ************************************ 00:11:45.571 START TEST ubsan 00:11:45.571 ************************************ 00:11:45.571 using ubsan 00:11:45.571 11:25:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:11:45.571 00:11:45.571 real 0m0.000s 00:11:45.571 user 0m0.000s 00:11:45.571 sys 0m0.000s 00:11:45.571 11:25:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:11:45.571 11:25:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:11:45.571 ************************************ 00:11:45.571 END TEST ubsan 00:11:45.571 ************************************ 00:11:45.829 11:25:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:11:45.829 11:25:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:11:45.829 11:25:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:11:45.829 11:25:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:11:45.829 11:25:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:11:45.829 11:25:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:11:45.829 11:25:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:11:45.829 11:25:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:11:45.829 11:25:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:11:45.829 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:45.829 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:46.394 Using 'verbs' RDMA provider 00:11:59.535 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:12:14.460 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:12:14.460 Creating mk/config.mk...done. 00:12:14.460 Creating mk/cc.flags.mk...done. 00:12:14.460 Type 'make' to build. 00:12:14.460 11:26:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:12:14.460 11:26:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:12:14.460 11:26:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:12:14.460 11:26:18 -- common/autotest_common.sh@10 -- $ set +x 00:12:14.460 ************************************ 00:12:14.460 START TEST make 00:12:14.460 ************************************ 00:12:14.460 11:26:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:12:14.460 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:12:14.460 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:12:14.460 meson setup builddir \ 00:12:14.460 -Dwith-libaio=enabled \ 00:12:14.460 -Dwith-liburing=enabled \ 00:12:14.460 -Dwith-libvfn=disabled \ 00:12:14.460 -Dwith-spdk=disabled \ 00:12:14.460 -Dexamples=false \ 00:12:14.460 -Dtests=false \ 00:12:14.460 -Dtools=false && \ 00:12:14.460 meson compile -C builddir && \ 00:12:14.460 cd -) 00:12:14.460 make[1]: Nothing to be done for 'all'. 
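The xnvme subproject build that starts here can be replayed by hand with the same meson options the log shows; a minimal sketch, assuming the job's /home/vagrant/spdk_repo/spdk checkout layout and a host with meson, ninja, and the libaio/liburing development packages installed:

#!/usr/bin/env bash
# Sketch: replay the xnvme subproject build outside Jenkins.
# Paths and PKG_CONFIG_PATH mirror the invocation in the log above.
set -euo pipefail

cd /home/vagrant/spdk_repo/spdk/xnvme
export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:-}:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"

meson setup builddir \
    -Dwith-libaio=enabled \
    -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled \
    -Dwith-spdk=disabled \
    -Dexamples=false \
    -Dtests=false \
    -Dtools=false

# Build only the library targets; examples/tests/tools are disabled,
# matching the CI invocation, which needs just the libxnvme.so link step.
meson compile -C builddir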
00:12:15.837 The Meson build system 00:12:15.837 Version: 1.5.0 00:12:15.837 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:12:15.837 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:12:15.837 Build type: native build 00:12:15.837 Project name: xnvme 00:12:15.837 Project version: 0.7.5 00:12:15.837 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:12:15.837 C linker for the host machine: cc ld.bfd 2.40-14 00:12:15.837 Host machine cpu family: x86_64 00:12:15.837 Host machine cpu: x86_64 00:12:15.837 Message: host_machine.system: linux 00:12:15.837 Compiler for C supports arguments -Wno-missing-braces: YES 00:12:15.837 Compiler for C supports arguments -Wno-cast-function-type: YES 00:12:15.837 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:12:15.837 Run-time dependency threads found: YES 00:12:15.837 Has header "setupapi.h" : NO 00:12:15.837 Has header "linux/blkzoned.h" : YES 00:12:15.837 Has header "linux/blkzoned.h" : YES (cached) 00:12:15.837 Has header "libaio.h" : YES 00:12:15.837 Library aio found: YES 00:12:15.837 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:12:15.837 Run-time dependency liburing found: YES 2.2 00:12:15.837 Dependency libvfn skipped: feature with-libvfn disabled 00:12:15.837 Found CMake: /usr/bin/cmake (3.27.7) 00:12:15.837 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:12:15.837 Subproject spdk : skipped: feature with-spdk disabled 00:12:15.837 Run-time dependency appleframeworks found: NO (tried framework) 00:12:15.837 Run-time dependency appleframeworks found: NO (tried framework) 00:12:15.837 Library rt found: YES 00:12:15.837 Checking for function "clock_gettime" with dependency -lrt: YES 00:12:15.837 Configuring xnvme_config.h using configuration 00:12:15.837 Configuring xnvme.spec using configuration 00:12:15.837 Run-time dependency bash-completion found: YES 2.11 00:12:15.837 Message: Bash-completions: /usr/share/bash-completion/completions 00:12:15.837 Program cp found: YES (/usr/bin/cp) 00:12:15.837 Build targets in project: 3 00:12:15.837 00:12:15.837 xnvme 0.7.5 00:12:15.837 00:12:15.837 Subprojects 00:12:15.837 spdk : NO Feature 'with-spdk' disabled 00:12:15.837 00:12:15.837 User defined options 00:12:15.837 examples : false 00:12:15.837 tests : false 00:12:15.837 tools : false 00:12:15.837 with-libaio : enabled 00:12:15.837 with-liburing: enabled 00:12:15.837 with-libvfn : disabled 00:12:15.837 with-spdk : disabled 00:12:15.837 00:12:15.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:12:16.095 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:12:16.095 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:12:16.355 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:12:16.355 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:12:16.355 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:12:16.355 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:12:16.355 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:12:16.355 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:12:16.355 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:12:16.355 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:12:16.355 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:12:16.355 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:12:16.355 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:12:16.355 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:12:16.355 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:12:16.355 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:12:16.614 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:12:16.614 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:12:16.614 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:12:16.614 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:12:16.614 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:12:16.614 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:12:16.614 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:12:16.614 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:12:16.614 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:12:16.614 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:12:16.614 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:12:16.614 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:12:16.614 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:12:16.614 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:12:16.614 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:12:16.614 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:12:16.614 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:12:16.614 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:12:16.614 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:12:16.614 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:12:16.614 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:12:16.614 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:12:16.614 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:12:16.614 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:12:16.614 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:12:16.614 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:12:16.614 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:12:16.614 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:12:16.614 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:12:16.874 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:12:16.874 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:12:16.874 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:12:16.874 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:12:16.874 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:12:16.874 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:12:16.874 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:12:16.874 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:12:16.874 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:12:16.874 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:12:16.874 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:12:16.874 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:12:16.874 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:12:16.874 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:12:16.874 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:12:16.874 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:12:16.874 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:12:16.874 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:12:16.874 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:12:16.874 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:12:17.133 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:12:17.133 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:12:17.133 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:12:17.133 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:12:17.133 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:12:17.133 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:12:17.133 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:12:17.133 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:12:17.133 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:12:17.700 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:12:17.700 [75/76] Linking static target lib/libxnvme.a 00:12:17.700 [76/76] Linking target lib/libxnvme.so.0.7.5 00:12:17.700 INFO: autodetecting backend as ninja 00:12:17.700 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:12:17.700 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:12:27.672 The Meson build system 00:12:27.672 Version: 1.5.0 00:12:27.672 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:12:27.672 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:27.672 Build type: native build 00:12:27.672 Program cat found: YES (/usr/bin/cat) 00:12:27.672 Project name: DPDK 00:12:27.672 Project version: 24.03.0 00:12:27.672 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:12:27.672 C linker for the host machine: cc ld.bfd 2.40-14 00:12:27.672 Host machine cpu family: x86_64 00:12:27.672 Host machine cpu: x86_64 00:12:27.672 Message: ## Building in Developer Mode ## 00:12:27.672 Program pkg-config found: YES (/usr/bin/pkg-config) 00:12:27.672 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:27.672 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:27.672 Program python3 found: YES (/usr/bin/python3) 00:12:27.672 Program cat found: YES (/usr/bin/cat) 00:12:27.672 Compiler for C supports arguments -march=native: YES 00:12:27.672 Checking for size of "void *" : 8 00:12:27.672 Checking for size of "void *" : 8 (cached) 00:12:27.672 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:12:27.672 Library m found: YES 00:12:27.672 Library numa found: YES 00:12:27.672 Has header "numaif.h" : YES 00:12:27.672 Library fdt found: NO 00:12:27.672 Library execinfo found: NO 00:12:27.672 Has header "execinfo.h" : YES 00:12:27.672 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:12:27.672 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:27.672 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:27.672 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:27.672 Run-time dependency openssl found: YES 3.1.1 00:12:27.672 Run-time dependency libpcap found: YES 1.10.4 00:12:27.672 Has header "pcap.h" with dependency libpcap: YES 00:12:27.672 Compiler for C supports arguments -Wcast-qual: YES 00:12:27.672 Compiler for C supports arguments -Wdeprecated: YES 00:12:27.672 Compiler for C supports arguments -Wformat: YES 00:12:27.672 Compiler for C supports arguments -Wformat-nonliteral: NO 00:12:27.672 Compiler for C supports arguments -Wformat-security: NO 00:12:27.672 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:27.672 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:27.672 Compiler for C supports arguments -Wnested-externs: YES 00:12:27.672 Compiler for C supports arguments -Wold-style-definition: YES 00:12:27.672 Compiler for C supports arguments -Wpointer-arith: YES 00:12:27.672 Compiler for C supports arguments -Wsign-compare: YES 00:12:27.672 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:27.672 Compiler for C supports arguments -Wundef: YES 00:12:27.672 Compiler for C supports arguments -Wwrite-strings: YES 00:12:27.672 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:27.672 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:12:27.672 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:27.672 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:12:27.672 Program objdump found: YES (/usr/bin/objdump) 00:12:27.672 Compiler for C supports arguments -mavx512f: YES 00:12:27.672 Checking if "AVX512 checking" compiles: YES 00:12:27.672 Fetching value of define "__SSE4_2__" : 1 00:12:27.672 Fetching value of define "__AES__" : 1 00:12:27.672 Fetching value of define "__AVX__" : 1 00:12:27.672 Fetching value of define "__AVX2__" : 1 00:12:27.672 Fetching value of define "__AVX512BW__" : (undefined) 00:12:27.672 Fetching value of define "__AVX512CD__" : (undefined) 00:12:27.672 Fetching value of define "__AVX512DQ__" : (undefined) 00:12:27.672 Fetching value of define "__AVX512F__" : (undefined) 00:12:27.672 Fetching value of define "__AVX512VL__" : (undefined) 00:12:27.672 Fetching value of define "__PCLMUL__" : 1 00:12:27.672 Fetching value of define "__RDRND__" : 1 00:12:27.672 Fetching value of define "__RDSEED__" : 1 00:12:27.672 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:27.672 Fetching value of define "__znver1__" : (undefined) 00:12:27.672 Fetching value of define "__znver2__" : (undefined) 00:12:27.672 Fetching value of define "__znver3__" : (undefined) 00:12:27.672 Fetching value of define "__znver4__" : (undefined) 00:12:27.672 Library asan found: YES 00:12:27.672 Compiler for C supports arguments -Wno-format-truncation: YES 00:12:27.672 Message: lib/log: Defining dependency "log" 00:12:27.672 Message: lib/kvargs: Defining dependency "kvargs" 00:12:27.672 Message: lib/telemetry: Defining dependency "telemetry" 00:12:27.672 Library rt found: YES 00:12:27.672 
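Each "Compiler for C supports arguments ...: YES" probe above amounts to compiling a trivial translation unit with the candidate flag and treating diagnostics as errors; an illustrative stand-alone equivalent in bash (a sketch of the technique, not meson's actual implementation):

check_cc_flag() {
    # Compile an empty program with the candidate flag; -Werror turns
    # "unrecognized option" warnings into failures, so unsupported
    # flags make the probe return nonzero.
    echo 'int main(void) { return 0; }' |
        cc -Werror "$1" -x c -o /dev/null - 2>/dev/null
}

check_cc_flag -mavx512f && echo "-mavx512f: YES" || echo "-mavx512f: NO"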
Checking for function "getentropy" : NO 00:12:27.672 Message: lib/eal: Defining dependency "eal" 00:12:27.672 Message: lib/ring: Defining dependency "ring" 00:12:27.672 Message: lib/rcu: Defining dependency "rcu" 00:12:27.672 Message: lib/mempool: Defining dependency "mempool" 00:12:27.672 Message: lib/mbuf: Defining dependency "mbuf" 00:12:27.672 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:27.672 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:12:27.672 Compiler for C supports arguments -mpclmul: YES 00:12:27.672 Compiler for C supports arguments -maes: YES 00:12:27.672 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:27.672 Compiler for C supports arguments -mavx512bw: YES 00:12:27.672 Compiler for C supports arguments -mavx512dq: YES 00:12:27.672 Compiler for C supports arguments -mavx512vl: YES 00:12:27.672 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:27.672 Compiler for C supports arguments -mavx2: YES 00:12:27.672 Compiler for C supports arguments -mavx: YES 00:12:27.672 Message: lib/net: Defining dependency "net" 00:12:27.672 Message: lib/meter: Defining dependency "meter" 00:12:27.672 Message: lib/ethdev: Defining dependency "ethdev" 00:12:27.672 Message: lib/pci: Defining dependency "pci" 00:12:27.672 Message: lib/cmdline: Defining dependency "cmdline" 00:12:27.672 Message: lib/hash: Defining dependency "hash" 00:12:27.672 Message: lib/timer: Defining dependency "timer" 00:12:27.672 Message: lib/compressdev: Defining dependency "compressdev" 00:12:27.672 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:27.672 Message: lib/dmadev: Defining dependency "dmadev" 00:12:27.672 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:27.672 Message: lib/power: Defining dependency "power" 00:12:27.672 Message: lib/reorder: Defining dependency "reorder" 00:12:27.672 Message: lib/security: Defining dependency "security" 00:12:27.672 Has header "linux/userfaultfd.h" : YES 00:12:27.672 Has header "linux/vduse.h" : YES 00:12:27.672 Message: lib/vhost: Defining dependency "vhost" 00:12:27.672 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:12:27.672 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:12:27.672 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:27.672 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:27.672 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:27.672 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:27.672 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:27.672 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:27.672 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:27.672 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:27.672 Program doxygen found: YES (/usr/local/bin/doxygen) 00:12:27.672 Configuring doxy-api-html.conf using configuration 00:12:27.672 Configuring doxy-api-man.conf using configuration 00:12:27.672 Program mandb found: YES (/usr/bin/mandb) 00:12:27.672 Program sphinx-build found: NO 00:12:27.672 Configuring rte_build_config.h using configuration 00:12:27.672 Message: 00:12:27.672 ================= 00:12:27.672 Applications Enabled 00:12:27.672 ================= 00:12:27.672 00:12:27.672 apps: 00:12:27.672 00:12:27.672 00:12:27.672 Message: 00:12:27.672 ================= 00:12:27.672 Libraries Enabled 00:12:27.672 ================= 
00:12:27.672 00:12:27.672 libs: 00:12:27.672 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:27.672 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:27.672 cryptodev, dmadev, power, reorder, security, vhost, 00:12:27.672 00:12:27.672 Message: 00:12:27.672 =============== 00:12:27.672 Drivers Enabled 00:12:27.672 =============== 00:12:27.672 00:12:27.672 common: 00:12:27.672 00:12:27.672 bus: 00:12:27.672 pci, vdev, 00:12:27.672 mempool: 00:12:27.672 ring, 00:12:27.672 dma: 00:12:27.672 00:12:27.672 net: 00:12:27.672 00:12:27.672 crypto: 00:12:27.672 00:12:27.672 compress: 00:12:27.672 00:12:27.672 vdpa: 00:12:27.672 00:12:27.672 00:12:27.672 Message: 00:12:27.672 ================= 00:12:27.672 Content Skipped 00:12:27.672 ================= 00:12:27.672 00:12:27.672 apps: 00:12:27.672 dumpcap: explicitly disabled via build config 00:12:27.673 graph: explicitly disabled via build config 00:12:27.673 pdump: explicitly disabled via build config 00:12:27.673 proc-info: explicitly disabled via build config 00:12:27.673 test-acl: explicitly disabled via build config 00:12:27.673 test-bbdev: explicitly disabled via build config 00:12:27.673 test-cmdline: explicitly disabled via build config 00:12:27.673 test-compress-perf: explicitly disabled via build config 00:12:27.673 test-crypto-perf: explicitly disabled via build config 00:12:27.673 test-dma-perf: explicitly disabled via build config 00:12:27.673 test-eventdev: explicitly disabled via build config 00:12:27.673 test-fib: explicitly disabled via build config 00:12:27.673 test-flow-perf: explicitly disabled via build config 00:12:27.673 test-gpudev: explicitly disabled via build config 00:12:27.673 test-mldev: explicitly disabled via build config 00:12:27.673 test-pipeline: explicitly disabled via build config 00:12:27.673 test-pmd: explicitly disabled via build config 00:12:27.673 test-regex: explicitly disabled via build config 00:12:27.673 test-sad: explicitly disabled via build config 00:12:27.673 test-security-perf: explicitly disabled via build config 00:12:27.673 00:12:27.673 libs: 00:12:27.673 argparse: explicitly disabled via build config 00:12:27.673 metrics: explicitly disabled via build config 00:12:27.673 acl: explicitly disabled via build config 00:12:27.673 bbdev: explicitly disabled via build config 00:12:27.673 bitratestats: explicitly disabled via build config 00:12:27.673 bpf: explicitly disabled via build config 00:12:27.673 cfgfile: explicitly disabled via build config 00:12:27.673 distributor: explicitly disabled via build config 00:12:27.673 efd: explicitly disabled via build config 00:12:27.673 eventdev: explicitly disabled via build config 00:12:27.673 dispatcher: explicitly disabled via build config 00:12:27.673 gpudev: explicitly disabled via build config 00:12:27.673 gro: explicitly disabled via build config 00:12:27.673 gso: explicitly disabled via build config 00:12:27.673 ip_frag: explicitly disabled via build config 00:12:27.673 jobstats: explicitly disabled via build config 00:12:27.673 latencystats: explicitly disabled via build config 00:12:27.673 lpm: explicitly disabled via build config 00:12:27.673 member: explicitly disabled via build config 00:12:27.673 pcapng: explicitly disabled via build config 00:12:27.673 rawdev: explicitly disabled via build config 00:12:27.673 regexdev: explicitly disabled via build config 00:12:27.673 mldev: explicitly disabled via build config 00:12:27.673 rib: explicitly disabled via build config 00:12:27.673 sched: explicitly disabled via build 
config 00:12:27.673 stack: explicitly disabled via build config 00:12:27.673 ipsec: explicitly disabled via build config 00:12:27.673 pdcp: explicitly disabled via build config 00:12:27.673 fib: explicitly disabled via build config 00:12:27.673 port: explicitly disabled via build config 00:12:27.673 pdump: explicitly disabled via build config 00:12:27.673 table: explicitly disabled via build config 00:12:27.673 pipeline: explicitly disabled via build config 00:12:27.673 graph: explicitly disabled via build config 00:12:27.673 node: explicitly disabled via build config 00:12:27.673 00:12:27.673 drivers: 00:12:27.673 common/cpt: not in enabled drivers build config 00:12:27.673 common/dpaax: not in enabled drivers build config 00:12:27.673 common/iavf: not in enabled drivers build config 00:12:27.673 common/idpf: not in enabled drivers build config 00:12:27.673 common/ionic: not in enabled drivers build config 00:12:27.673 common/mvep: not in enabled drivers build config 00:12:27.673 common/octeontx: not in enabled drivers build config 00:12:27.673 bus/auxiliary: not in enabled drivers build config 00:12:27.673 bus/cdx: not in enabled drivers build config 00:12:27.673 bus/dpaa: not in enabled drivers build config 00:12:27.673 bus/fslmc: not in enabled drivers build config 00:12:27.673 bus/ifpga: not in enabled drivers build config 00:12:27.673 bus/platform: not in enabled drivers build config 00:12:27.673 bus/uacce: not in enabled drivers build config 00:12:27.673 bus/vmbus: not in enabled drivers build config 00:12:27.673 common/cnxk: not in enabled drivers build config 00:12:27.673 common/mlx5: not in enabled drivers build config 00:12:27.673 common/nfp: not in enabled drivers build config 00:12:27.673 common/nitrox: not in enabled drivers build config 00:12:27.673 common/qat: not in enabled drivers build config 00:12:27.673 common/sfc_efx: not in enabled drivers build config 00:12:27.673 mempool/bucket: not in enabled drivers build config 00:12:27.673 mempool/cnxk: not in enabled drivers build config 00:12:27.673 mempool/dpaa: not in enabled drivers build config 00:12:27.673 mempool/dpaa2: not in enabled drivers build config 00:12:27.673 mempool/octeontx: not in enabled drivers build config 00:12:27.673 mempool/stack: not in enabled drivers build config 00:12:27.673 dma/cnxk: not in enabled drivers build config 00:12:27.673 dma/dpaa: not in enabled drivers build config 00:12:27.673 dma/dpaa2: not in enabled drivers build config 00:12:27.673 dma/hisilicon: not in enabled drivers build config 00:12:27.673 dma/idxd: not in enabled drivers build config 00:12:27.673 dma/ioat: not in enabled drivers build config 00:12:27.673 dma/skeleton: not in enabled drivers build config 00:12:27.673 net/af_packet: not in enabled drivers build config 00:12:27.673 net/af_xdp: not in enabled drivers build config 00:12:27.673 net/ark: not in enabled drivers build config 00:12:27.673 net/atlantic: not in enabled drivers build config 00:12:27.673 net/avp: not in enabled drivers build config 00:12:27.673 net/axgbe: not in enabled drivers build config 00:12:27.673 net/bnx2x: not in enabled drivers build config 00:12:27.673 net/bnxt: not in enabled drivers build config 00:12:27.673 net/bonding: not in enabled drivers build config 00:12:27.673 net/cnxk: not in enabled drivers build config 00:12:27.673 net/cpfl: not in enabled drivers build config 00:12:27.673 net/cxgbe: not in enabled drivers build config 00:12:27.673 net/dpaa: not in enabled drivers build config 00:12:27.673 net/dpaa2: not in enabled drivers build 
config 00:12:27.673 net/e1000: not in enabled drivers build config 00:12:27.673 net/ena: not in enabled drivers build config 00:12:27.673 net/enetc: not in enabled drivers build config 00:12:27.673 net/enetfec: not in enabled drivers build config 00:12:27.673 net/enic: not in enabled drivers build config 00:12:27.673 net/failsafe: not in enabled drivers build config 00:12:27.673 net/fm10k: not in enabled drivers build config 00:12:27.673 net/gve: not in enabled drivers build config 00:12:27.673 net/hinic: not in enabled drivers build config 00:12:27.673 net/hns3: not in enabled drivers build config 00:12:27.673 net/i40e: not in enabled drivers build config 00:12:27.673 net/iavf: not in enabled drivers build config 00:12:27.673 net/ice: not in enabled drivers build config 00:12:27.673 net/idpf: not in enabled drivers build config 00:12:27.673 net/igc: not in enabled drivers build config 00:12:27.673 net/ionic: not in enabled drivers build config 00:12:27.673 net/ipn3ke: not in enabled drivers build config 00:12:27.673 net/ixgbe: not in enabled drivers build config 00:12:27.673 net/mana: not in enabled drivers build config 00:12:27.673 net/memif: not in enabled drivers build config 00:12:27.673 net/mlx4: not in enabled drivers build config 00:12:27.673 net/mlx5: not in enabled drivers build config 00:12:27.673 net/mvneta: not in enabled drivers build config 00:12:27.673 net/mvpp2: not in enabled drivers build config 00:12:27.673 net/netvsc: not in enabled drivers build config 00:12:27.673 net/nfb: not in enabled drivers build config 00:12:27.673 net/nfp: not in enabled drivers build config 00:12:27.673 net/ngbe: not in enabled drivers build config 00:12:27.673 net/null: not in enabled drivers build config 00:12:27.673 net/octeontx: not in enabled drivers build config 00:12:27.673 net/octeon_ep: not in enabled drivers build config 00:12:27.673 net/pcap: not in enabled drivers build config 00:12:27.673 net/pfe: not in enabled drivers build config 00:12:27.673 net/qede: not in enabled drivers build config 00:12:27.673 net/ring: not in enabled drivers build config 00:12:27.673 net/sfc: not in enabled drivers build config 00:12:27.673 net/softnic: not in enabled drivers build config 00:12:27.673 net/tap: not in enabled drivers build config 00:12:27.673 net/thunderx: not in enabled drivers build config 00:12:27.673 net/txgbe: not in enabled drivers build config 00:12:27.673 net/vdev_netvsc: not in enabled drivers build config 00:12:27.673 net/vhost: not in enabled drivers build config 00:12:27.673 net/virtio: not in enabled drivers build config 00:12:27.673 net/vmxnet3: not in enabled drivers build config 00:12:27.673 raw/*: missing internal dependency, "rawdev" 00:12:27.673 crypto/armv8: not in enabled drivers build config 00:12:27.673 crypto/bcmfs: not in enabled drivers build config 00:12:27.673 crypto/caam_jr: not in enabled drivers build config 00:12:27.673 crypto/ccp: not in enabled drivers build config 00:12:27.673 crypto/cnxk: not in enabled drivers build config 00:12:27.673 crypto/dpaa_sec: not in enabled drivers build config 00:12:27.673 crypto/dpaa2_sec: not in enabled drivers build config 00:12:27.673 crypto/ipsec_mb: not in enabled drivers build config 00:12:27.673 crypto/mlx5: not in enabled drivers build config 00:12:27.673 crypto/mvsam: not in enabled drivers build config 00:12:27.673 crypto/nitrox: not in enabled drivers build config 00:12:27.673 crypto/null: not in enabled drivers build config 00:12:27.673 crypto/octeontx: not in enabled drivers build config 00:12:27.673 
crypto/openssl: not in enabled drivers build config 00:12:27.673 crypto/scheduler: not in enabled drivers build config 00:12:27.673 crypto/uadk: not in enabled drivers build config 00:12:27.673 crypto/virtio: not in enabled drivers build config 00:12:27.673 compress/isal: not in enabled drivers build config 00:12:27.673 compress/mlx5: not in enabled drivers build config 00:12:27.673 compress/nitrox: not in enabled drivers build config 00:12:27.673 compress/octeontx: not in enabled drivers build config 00:12:27.673 compress/zlib: not in enabled drivers build config 00:12:27.673 regex/*: missing internal dependency, "regexdev" 00:12:27.673 ml/*: missing internal dependency, "mldev" 00:12:27.673 vdpa/ifc: not in enabled drivers build config 00:12:27.673 vdpa/mlx5: not in enabled drivers build config 00:12:27.673 vdpa/nfp: not in enabled drivers build config 00:12:27.673 vdpa/sfc: not in enabled drivers build config 00:12:27.673 event/*: missing internal dependency, "eventdev" 00:12:27.673 baseband/*: missing internal dependency, "bbdev" 00:12:27.673 gpu/*: missing internal dependency, "gpudev" 00:12:27.673 00:12:27.673 00:12:27.673 Build targets in project: 85 00:12:27.673 00:12:27.673 DPDK 24.03.0 00:12:27.673 00:12:27.674 User defined options 00:12:27.674 buildtype : debug 00:12:27.674 default_library : shared 00:12:27.674 libdir : lib 00:12:27.674 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:27.674 b_sanitize : address 00:12:27.674 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:12:27.674 c_link_args : 00:12:27.674 cpu_instruction_set: native 00:12:27.674 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:27.674 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:27.674 enable_docs : false 00:12:27.674 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:12:27.674 enable_kmods : false 00:12:27.674 max_lcores : 128 00:12:27.674 tests : false 00:12:27.674 00:12:27.674 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:12:27.674 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:27.674 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:12:27.674 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:27.674 [3/268] Linking static target lib/librte_kvargs.a 00:12:27.674 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:27.674 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:27.932 [6/268] Linking static target lib/librte_log.a 00:12:28.191 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:28.450 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:28.450 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:28.450 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:28.709 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 
00:12:28.709 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:28.709 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:28.709 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:28.709 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:28.709 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:28.968 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:28.968 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:28.968 [19/268] Linking static target lib/librte_telemetry.a 00:12:28.968 [20/268] Linking target lib/librte_log.so.24.1 00:12:29.227 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:12:29.227 [22/268] Linking target lib/librte_kvargs.so.24.1 00:12:29.486 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:29.486 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:29.486 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:12:29.486 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:29.486 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:29.744 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:29.744 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:29.744 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:29.744 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:29.744 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:30.003 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:30.003 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:30.003 [35/268] Linking target lib/librte_telemetry.so.24.1 00:12:30.261 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:12:30.261 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:30.261 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:30.520 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:30.520 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:30.520 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:30.778 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:30.779 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:30.779 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:30.779 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:30.779 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:30.779 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:31.037 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:31.037 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:31.296 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 
00:12:31.554 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:12:31.554 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:31.813 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:31.813 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:31.813 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:31.813 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:31.813 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:32.071 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:32.071 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:32.071 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:12:32.352 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:32.611 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:32.611 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:32.611 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:32.611 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:32.869 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:12:32.869 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:32.869 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:32.869 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:12:33.126 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:12:33.126 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:12:33.126 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:33.126 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:12:33.126 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:33.126 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:33.385 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:12:33.385 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:12:33.642 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:12:33.642 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:12:33.642 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:12:33.642 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:33.642 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:33.901 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:12:34.160 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:12:34.160 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:34.160 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:34.160 [87/268] Linking static target lib/librte_eal.a 00:12:34.160 [88/268] Linking static target lib/librte_rcu.a 00:12:34.160 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:34.160 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:34.418 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:34.418 [92/268] Linking 
static target lib/librte_ring.a 00:12:34.418 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:34.418 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:34.677 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:34.677 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:34.677 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:34.935 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:34.935 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:34.935 [100/268] Linking static target lib/librte_mempool.a 00:12:34.935 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:35.194 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:35.194 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:35.194 [104/268] Linking static target lib/librte_mbuf.a 00:12:35.194 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:35.194 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:35.452 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:35.452 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:35.452 [109/268] Linking static target lib/librte_net.a 00:12:35.711 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:35.711 [111/268] Linking static target lib/librte_meter.a 00:12:35.711 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:35.969 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:35.969 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:35.969 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:35.969 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:35.969 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:36.228 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:36.228 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:36.796 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:36.796 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:37.055 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:37.055 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:37.055 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:37.313 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:37.313 [126/268] Linking static target lib/librte_pci.a 00:12:37.313 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:37.313 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:37.313 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:37.313 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:37.572 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:37.573 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:37.573 [133/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:37.573 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:37.832 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:37.832 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:37.832 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:37.832 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:37.832 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:37.832 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:37.832 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:37.832 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:37.832 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:12:38.091 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:38.091 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:12:38.350 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:38.350 [147/268] Linking static target lib/librte_cmdline.a 00:12:38.623 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:12:38.623 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:38.623 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:38.882 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:38.882 [152/268] Linking static target lib/librte_timer.a 00:12:38.882 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:39.141 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:39.141 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:39.399 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:39.399 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:39.399 [158/268] Linking static target lib/librte_hash.a 00:12:39.659 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:39.659 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:39.659 [161/268] Linking static target lib/librte_compressdev.a 00:12:39.659 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:39.659 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:12:39.659 [164/268] Linking static target lib/librte_ethdev.a 00:12:39.659 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:40.227 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:40.227 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.227 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:12:40.227 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:40.227 [170/268] Linking static target lib/librte_dmadev.a 00:12:40.227 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:12:40.227 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:12:40.486 [173/268] Compiling C 
object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:12:40.486 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.745 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.003 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:12:41.003 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:12:41.003 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:12:41.262 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:12:41.262 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:12:41.262 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.262 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:12:41.262 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:41.262 [184/268] Linking static target lib/librte_cryptodev.a 00:12:41.520 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:12:41.520 [186/268] Linking static target lib/librte_power.a 00:12:42.086 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:42.086 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:12:42.086 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:12:42.086 [190/268] Linking static target lib/librte_reorder.a 00:12:42.086 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:12:42.344 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:42.344 [193/268] Linking static target lib/librte_security.a 00:12:42.602 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.861 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:12:42.861 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:12:43.119 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:43.378 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:12:43.378 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:43.378 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:12:43.649 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:43.649 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:12:43.649 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:43.908 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:43.908 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:44.166 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:12:44.166 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:12:44.425 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:12:44.425 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:44.425 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:44.425 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:44.685 [212/268] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:12:44.685 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:44.685 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:44.685 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:44.685 [216/268] Linking static target drivers/librte_bus_vdev.a 00:12:44.685 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:44.685 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:44.685 [219/268] Linking static target drivers/librte_bus_pci.a 00:12:44.947 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:44.947 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:44.947 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:44.947 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:45.206 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:45.206 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:45.206 [226/268] Linking static target drivers/librte_mempool_ring.a 00:12:45.206 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:46.141 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:46.141 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:12:46.141 [230/268] Linking target lib/librte_eal.so.24.1 00:12:46.401 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:12:46.401 [232/268] Linking target lib/librte_ring.so.24.1 00:12:46.401 [233/268] Linking target lib/librte_timer.so.24.1 00:12:46.401 [234/268] Linking target lib/librte_meter.so.24.1 00:12:46.401 [235/268] Linking target lib/librte_pci.so.24.1 00:12:46.401 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:12:46.401 [237/268] Linking target lib/librte_dmadev.so.24.1 00:12:46.660 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:12:46.660 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:12:46.660 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:12:46.660 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:12:46.660 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:12:46.660 [243/268] Linking target lib/librte_mempool.so.24.1 00:12:46.660 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:12:46.660 [245/268] Linking target lib/librte_rcu.so.24.1 00:12:46.660 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:12:46.660 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:12:46.918 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:12:46.918 [249/268] Linking target lib/librte_mbuf.so.24.1 00:12:46.918 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:12:46.919 [251/268] Linking target lib/librte_reorder.so.24.1 00:12:46.919 [252/268] Linking target 
lib/librte_net.so.24.1 00:12:46.919 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:12:46.919 [254/268] Linking target lib/librte_compressdev.so.24.1 00:12:47.178 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:12:47.178 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:12:47.178 [257/268] Linking target lib/librte_security.so.24.1 00:12:47.178 [258/268] Linking target lib/librte_cmdline.so.24.1 00:12:47.178 [259/268] Linking target lib/librte_hash.so.24.1 00:12:47.436 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:12:47.695 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:47.695 [262/268] Linking target lib/librte_ethdev.so.24.1 00:12:47.955 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:12:47.955 [264/268] Linking target lib/librte_power.so.24.1 00:12:50.490 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:12:50.490 [266/268] Linking static target lib/librte_vhost.a 00:12:51.929 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:52.187 [268/268] Linking target lib/librte_vhost.so.24.1 00:12:52.187 INFO: autodetecting backend as ninja 00:12:52.187 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:14.169 CC lib/log/log.o 00:13:14.169 CC lib/log/log_flags.o 00:13:14.169 CC lib/ut/ut.o 00:13:14.169 CC lib/log/log_deprecated.o 00:13:14.169 CC lib/ut_mock/mock.o 00:13:14.169 LIB libspdk_ut_mock.a 00:13:14.169 LIB libspdk_ut.a 00:13:14.169 SO libspdk_ut_mock.so.6.0 00:13:14.169 SO libspdk_ut.so.2.0 00:13:14.169 LIB libspdk_log.a 00:13:14.169 SYMLINK libspdk_ut_mock.so 00:13:14.169 SYMLINK libspdk_ut.so 00:13:14.169 SO libspdk_log.so.7.1 00:13:14.169 SYMLINK libspdk_log.so 00:13:14.169 CXX lib/trace_parser/trace.o 00:13:14.169 CC lib/util/base64.o 00:13:14.170 CC lib/util/bit_array.o 00:13:14.170 CC lib/util/cpuset.o 00:13:14.170 CC lib/util/crc16.o 00:13:14.170 CC lib/util/crc32.o 00:13:14.170 CC lib/ioat/ioat.o 00:13:14.170 CC lib/util/crc32c.o 00:13:14.170 CC lib/dma/dma.o 00:13:14.428 CC lib/vfio_user/host/vfio_user_pci.o 00:13:14.428 CC lib/util/crc32_ieee.o 00:13:14.428 CC lib/util/crc64.o 00:13:14.428 CC lib/util/dif.o 00:13:14.428 CC lib/vfio_user/host/vfio_user.o 00:13:14.428 CC lib/util/fd.o 00:13:14.686 CC lib/util/fd_group.o 00:13:14.686 LIB libspdk_dma.a 00:13:14.686 SO libspdk_dma.so.5.0 00:13:14.686 CC lib/util/file.o 00:13:14.686 CC lib/util/hexlify.o 00:13:14.686 SYMLINK libspdk_dma.so 00:13:14.686 CC lib/util/iov.o 00:13:14.686 LIB libspdk_ioat.a 00:13:14.686 CC lib/util/math.o 00:13:14.686 SO libspdk_ioat.so.7.0 00:13:14.686 CC lib/util/net.o 00:13:14.946 SYMLINK libspdk_ioat.so 00:13:14.946 CC lib/util/pipe.o 00:13:14.946 CC lib/util/strerror_tls.o 00:13:14.946 LIB libspdk_vfio_user.a 00:13:14.946 CC lib/util/string.o 00:13:14.946 CC lib/util/uuid.o 00:13:14.946 CC lib/util/xor.o 00:13:14.946 SO libspdk_vfio_user.so.5.0 00:13:14.946 SYMLINK libspdk_vfio_user.so 00:13:14.946 CC lib/util/zipf.o 00:13:14.946 CC lib/util/md5.o 00:13:15.204 LIB libspdk_util.a 00:13:15.463 LIB libspdk_trace_parser.a 00:13:15.463 SO libspdk_util.so.10.1 00:13:15.463 SO libspdk_trace_parser.so.6.0 00:13:15.463 SYMLINK libspdk_trace_parser.so 00:13:15.463 SYMLINK libspdk_util.so 00:13:15.721 CC 
lib/json/json_parse.o 00:13:15.721 CC lib/rdma_utils/rdma_utils.o 00:13:15.721 CC lib/json/json_write.o 00:13:15.721 CC lib/json/json_util.o 00:13:15.721 CC lib/env_dpdk/env.o 00:13:15.721 CC lib/vmd/led.o 00:13:15.721 CC lib/env_dpdk/memory.o 00:13:15.721 CC lib/idxd/idxd.o 00:13:15.721 CC lib/vmd/vmd.o 00:13:15.721 CC lib/conf/conf.o 00:13:15.978 CC lib/env_dpdk/pci.o 00:13:15.978 LIB libspdk_conf.a 00:13:15.978 CC lib/env_dpdk/init.o 00:13:15.978 CC lib/idxd/idxd_user.o 00:13:15.978 SO libspdk_conf.so.6.0 00:13:15.978 LIB libspdk_rdma_utils.a 00:13:15.978 SO libspdk_rdma_utils.so.1.0 00:13:15.978 SYMLINK libspdk_conf.so 00:13:15.978 CC lib/idxd/idxd_kernel.o 00:13:15.978 LIB libspdk_json.a 00:13:16.236 SYMLINK libspdk_rdma_utils.so 00:13:16.236 CC lib/env_dpdk/threads.o 00:13:16.236 SO libspdk_json.so.6.0 00:13:16.236 SYMLINK libspdk_json.so 00:13:16.236 CC lib/env_dpdk/pci_ioat.o 00:13:16.236 CC lib/env_dpdk/pci_virtio.o 00:13:16.494 CC lib/env_dpdk/pci_vmd.o 00:13:16.494 CC lib/rdma_provider/common.o 00:13:16.494 CC lib/rdma_provider/rdma_provider_verbs.o 00:13:16.494 CC lib/env_dpdk/pci_idxd.o 00:13:16.494 CC lib/jsonrpc/jsonrpc_server.o 00:13:16.494 LIB libspdk_idxd.a 00:13:16.494 CC lib/env_dpdk/pci_event.o 00:13:16.494 SO libspdk_idxd.so.12.1 00:13:16.494 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:16.494 CC lib/jsonrpc/jsonrpc_client.o 00:13:16.494 LIB libspdk_vmd.a 00:13:16.753 SYMLINK libspdk_idxd.so 00:13:16.753 SO libspdk_vmd.so.6.0 00:13:16.753 CC lib/env_dpdk/sigbus_handler.o 00:13:16.753 CC lib/env_dpdk/pci_dpdk.o 00:13:16.753 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:16.753 SYMLINK libspdk_vmd.so 00:13:16.753 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:16.753 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:16.753 LIB libspdk_rdma_provider.a 00:13:16.753 SO libspdk_rdma_provider.so.7.0 00:13:17.011 SYMLINK libspdk_rdma_provider.so 00:13:17.011 LIB libspdk_jsonrpc.a 00:13:17.011 SO libspdk_jsonrpc.so.6.0 00:13:17.269 SYMLINK libspdk_jsonrpc.so 00:13:17.528 CC lib/rpc/rpc.o 00:13:17.528 LIB libspdk_env_dpdk.a 00:13:17.786 LIB libspdk_rpc.a 00:13:17.786 SO libspdk_rpc.so.6.0 00:13:17.786 SO libspdk_env_dpdk.so.15.1 00:13:17.786 SYMLINK libspdk_rpc.so 00:13:17.786 SYMLINK libspdk_env_dpdk.so 00:13:18.045 CC lib/keyring/keyring_rpc.o 00:13:18.045 CC lib/keyring/keyring.o 00:13:18.045 CC lib/trace/trace.o 00:13:18.045 CC lib/trace/trace_flags.o 00:13:18.045 CC lib/trace/trace_rpc.o 00:13:18.045 CC lib/notify/notify.o 00:13:18.045 CC lib/notify/notify_rpc.o 00:13:18.303 LIB libspdk_notify.a 00:13:18.303 LIB libspdk_keyring.a 00:13:18.303 SO libspdk_keyring.so.2.0 00:13:18.303 SO libspdk_notify.so.6.0 00:13:18.303 LIB libspdk_trace.a 00:13:18.303 SYMLINK libspdk_notify.so 00:13:18.303 SYMLINK libspdk_keyring.so 00:13:18.303 SO libspdk_trace.so.11.0 00:13:18.561 SYMLINK libspdk_trace.so 00:13:18.819 CC lib/thread/thread.o 00:13:18.819 CC lib/thread/iobuf.o 00:13:18.819 CC lib/sock/sock_rpc.o 00:13:18.819 CC lib/sock/sock.o 00:13:19.386 LIB libspdk_sock.a 00:13:19.386 SO libspdk_sock.so.10.0 00:13:19.386 SYMLINK libspdk_sock.so 00:13:19.667 CC lib/nvme/nvme_ctrlr.o 00:13:19.667 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:19.667 CC lib/nvme/nvme_fabric.o 00:13:19.667 CC lib/nvme/nvme_ns_cmd.o 00:13:19.667 CC lib/nvme/nvme_ns.o 00:13:19.667 CC lib/nvme/nvme_pcie_common.o 00:13:19.667 CC lib/nvme/nvme_pcie.o 00:13:19.667 CC lib/nvme/nvme_qpair.o 00:13:19.667 CC lib/nvme/nvme.o 00:13:20.599 CC lib/nvme/nvme_quirks.o 00:13:20.857 CC lib/nvme/nvme_transport.o 00:13:20.857 LIB libspdk_thread.a 00:13:20.857 SO 
libspdk_thread.so.11.0 00:13:21.114 SYMLINK libspdk_thread.so 00:13:21.114 CC lib/nvme/nvme_discovery.o 00:13:21.114 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:21.371 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:21.371 CC lib/nvme/nvme_tcp.o 00:13:21.629 CC lib/nvme/nvme_opal.o 00:13:21.629 CC lib/nvme/nvme_io_msg.o 00:13:21.629 CC lib/nvme/nvme_poll_group.o 00:13:21.629 CC lib/nvme/nvme_zns.o 00:13:21.629 CC lib/nvme/nvme_stubs.o 00:13:21.629 CC lib/nvme/nvme_auth.o 00:13:21.887 CC lib/nvme/nvme_cuse.o 00:13:22.145 CC lib/nvme/nvme_rdma.o 00:13:22.402 CC lib/accel/accel.o 00:13:22.660 CC lib/accel/accel_rpc.o 00:13:22.660 CC lib/blob/blobstore.o 00:13:22.660 CC lib/blob/request.o 00:13:22.660 CC lib/blob/zeroes.o 00:13:22.918 CC lib/blob/blob_bs_dev.o 00:13:22.918 CC lib/accel/accel_sw.o 00:13:23.176 CC lib/init/json_config.o 00:13:23.176 CC lib/init/subsystem.o 00:13:23.176 CC lib/init/subsystem_rpc.o 00:13:23.435 CC lib/virtio/virtio.o 00:13:23.435 CC lib/fsdev/fsdev.o 00:13:23.435 CC lib/fsdev/fsdev_io.o 00:13:23.435 CC lib/init/rpc.o 00:13:23.435 CC lib/fsdev/fsdev_rpc.o 00:13:23.435 CC lib/virtio/virtio_vhost_user.o 00:13:23.435 CC lib/virtio/virtio_vfio_user.o 00:13:23.693 LIB libspdk_init.a 00:13:23.693 SO libspdk_init.so.6.0 00:13:23.693 SYMLINK libspdk_init.so 00:13:23.693 CC lib/virtio/virtio_pci.o 00:13:23.952 CC lib/event/app.o 00:13:23.952 CC lib/event/reactor.o 00:13:23.952 CC lib/event/log_rpc.o 00:13:23.952 CC lib/event/app_rpc.o 00:13:23.952 CC lib/event/scheduler_static.o 00:13:23.952 LIB libspdk_accel.a 00:13:24.210 SO libspdk_accel.so.16.0 00:13:24.210 SYMLINK libspdk_accel.so 00:13:24.211 LIB libspdk_fsdev.a 00:13:24.211 LIB libspdk_virtio.a 00:13:24.468 LIB libspdk_nvme.a 00:13:24.468 SO libspdk_fsdev.so.2.0 00:13:24.468 SO libspdk_virtio.so.7.0 00:13:24.468 CC lib/bdev/bdev.o 00:13:24.468 CC lib/bdev/bdev_rpc.o 00:13:24.468 CC lib/bdev/bdev_zone.o 00:13:24.468 CC lib/bdev/part.o 00:13:24.468 SYMLINK libspdk_fsdev.so 00:13:24.468 SYMLINK libspdk_virtio.so 00:13:24.468 CC lib/bdev/scsi_nvme.o 00:13:24.726 SO libspdk_nvme.so.15.0 00:13:24.726 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:13:24.726 LIB libspdk_event.a 00:13:24.985 SO libspdk_event.so.14.0 00:13:24.985 SYMLINK libspdk_nvme.so 00:13:24.985 SYMLINK libspdk_event.so 00:13:25.551 LIB libspdk_fuse_dispatcher.a 00:13:25.551 SO libspdk_fuse_dispatcher.so.1.0 00:13:25.810 SYMLINK libspdk_fuse_dispatcher.so 00:13:27.710 LIB libspdk_blob.a 00:13:27.710 SO libspdk_blob.so.11.0 00:13:27.710 SYMLINK libspdk_blob.so 00:13:27.967 CC lib/blobfs/tree.o 00:13:27.967 CC lib/blobfs/blobfs.o 00:13:27.967 CC lib/lvol/lvol.o 00:13:28.223 LIB libspdk_bdev.a 00:13:28.223 SO libspdk_bdev.so.17.0 00:13:28.480 SYMLINK libspdk_bdev.so 00:13:28.480 CC lib/ublk/ublk.o 00:13:28.480 CC lib/ublk/ublk_rpc.o 00:13:28.480 CC lib/ftl/ftl_core.o 00:13:28.480 CC lib/ftl/ftl_init.o 00:13:28.480 CC lib/ftl/ftl_layout.o 00:13:28.480 CC lib/scsi/dev.o 00:13:28.737 CC lib/nbd/nbd.o 00:13:28.737 CC lib/nvmf/ctrlr.o 00:13:28.737 CC lib/nvmf/ctrlr_discovery.o 00:13:28.995 CC lib/nvmf/ctrlr_bdev.o 00:13:28.995 CC lib/scsi/lun.o 00:13:28.995 CC lib/ftl/ftl_debug.o 00:13:28.995 LIB libspdk_blobfs.a 00:13:28.995 CC lib/nbd/nbd_rpc.o 00:13:28.995 SO libspdk_blobfs.so.10.0 00:13:29.254 CC lib/scsi/port.o 00:13:29.254 SYMLINK libspdk_blobfs.so 00:13:29.254 CC lib/ftl/ftl_io.o 00:13:29.254 LIB libspdk_lvol.a 00:13:29.254 SO libspdk_lvol.so.10.0 00:13:29.254 LIB libspdk_nbd.a 00:13:29.254 SYMLINK libspdk_lvol.so 00:13:29.254 SO libspdk_nbd.so.7.0 00:13:29.254 CC 
lib/ftl/ftl_sb.o 00:13:29.254 CC lib/ftl/ftl_l2p.o 00:13:29.254 CC lib/ftl/ftl_l2p_flat.o 00:13:29.254 CC lib/scsi/scsi.o 00:13:29.254 SYMLINK libspdk_nbd.so 00:13:29.254 CC lib/ftl/ftl_nv_cache.o 00:13:29.511 LIB libspdk_ublk.a 00:13:29.511 SO libspdk_ublk.so.3.0 00:13:29.511 CC lib/nvmf/subsystem.o 00:13:29.511 CC lib/nvmf/nvmf.o 00:13:29.511 CC lib/scsi/scsi_bdev.o 00:13:29.511 SYMLINK libspdk_ublk.so 00:13:29.511 CC lib/scsi/scsi_pr.o 00:13:29.511 CC lib/scsi/scsi_rpc.o 00:13:29.511 CC lib/ftl/ftl_band.o 00:13:29.511 CC lib/ftl/ftl_band_ops.o 00:13:29.770 CC lib/scsi/task.o 00:13:29.770 CC lib/nvmf/nvmf_rpc.o 00:13:30.097 CC lib/ftl/ftl_writer.o 00:13:30.097 CC lib/ftl/ftl_rq.o 00:13:30.097 CC lib/nvmf/transport.o 00:13:30.097 CC lib/nvmf/tcp.o 00:13:30.097 LIB libspdk_scsi.a 00:13:30.097 CC lib/nvmf/stubs.o 00:13:30.373 SO libspdk_scsi.so.9.0 00:13:30.373 CC lib/nvmf/mdns_server.o 00:13:30.373 SYMLINK libspdk_scsi.so 00:13:30.373 CC lib/nvmf/rdma.o 00:13:30.631 CC lib/ftl/ftl_reloc.o 00:13:30.631 CC lib/ftl/ftl_l2p_cache.o 00:13:30.631 CC lib/nvmf/auth.o 00:13:30.888 CC lib/ftl/ftl_p2l.o 00:13:31.146 CC lib/ftl/ftl_p2l_log.o 00:13:31.146 CC lib/ftl/mngt/ftl_mngt.o 00:13:31.146 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:31.404 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:31.404 CC lib/iscsi/conn.o 00:13:31.404 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:31.404 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:31.404 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:31.404 CC lib/iscsi/init_grp.o 00:13:31.404 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:31.662 CC lib/vhost/vhost.o 00:13:31.662 CC lib/vhost/vhost_rpc.o 00:13:31.662 CC lib/vhost/vhost_scsi.o 00:13:31.662 CC lib/iscsi/iscsi.o 00:13:31.662 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:31.662 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:31.919 CC lib/vhost/vhost_blk.o 00:13:31.919 CC lib/vhost/rte_vhost_user.o 00:13:32.176 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:32.177 CC lib/iscsi/param.o 00:13:32.177 CC lib/iscsi/portal_grp.o 00:13:32.177 CC lib/iscsi/tgt_node.o 00:13:32.435 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:32.435 CC lib/iscsi/iscsi_subsystem.o 00:13:32.435 CC lib/iscsi/iscsi_rpc.o 00:13:32.694 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:32.694 CC lib/iscsi/task.o 00:13:32.694 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:32.953 CC lib/ftl/utils/ftl_conf.o 00:13:32.953 CC lib/ftl/utils/ftl_md.o 00:13:32.953 CC lib/ftl/utils/ftl_mempool.o 00:13:32.953 CC lib/ftl/utils/ftl_bitmap.o 00:13:32.953 CC lib/ftl/utils/ftl_property.o 00:13:33.212 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:33.212 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:33.212 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:33.212 LIB libspdk_vhost.a 00:13:33.212 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:33.212 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:33.212 SO libspdk_vhost.so.8.0 00:13:33.212 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:33.212 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:13:33.212 LIB libspdk_nvmf.a 00:13:33.470 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:33.470 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:33.470 SYMLINK libspdk_vhost.so 00:13:33.471 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:33.471 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:33.471 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:13:33.471 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:13:33.471 SO libspdk_nvmf.so.20.0 00:13:33.471 CC lib/ftl/base/ftl_base_dev.o 00:13:33.471 CC lib/ftl/base/ftl_base_bdev.o 00:13:33.728 CC lib/ftl/ftl_trace.o 00:13:33.728 LIB libspdk_iscsi.a 00:13:33.728 SO libspdk_iscsi.so.8.0 00:13:33.728 SYMLINK libspdk_nvmf.so 00:13:33.986 LIB 
libspdk_ftl.a 00:13:33.986 SYMLINK libspdk_iscsi.so 00:13:34.244 SO libspdk_ftl.so.9.0 00:13:34.610 SYMLINK libspdk_ftl.so 00:13:34.872 CC module/env_dpdk/env_dpdk_rpc.o 00:13:34.872 CC module/accel/error/accel_error.o 00:13:35.130 CC module/keyring/file/keyring.o 00:13:35.130 CC module/fsdev/aio/fsdev_aio.o 00:13:35.130 CC module/sock/posix/posix.o 00:13:35.130 CC module/blob/bdev/blob_bdev.o 00:13:35.130 CC module/accel/ioat/accel_ioat.o 00:13:35.130 CC module/accel/iaa/accel_iaa.o 00:13:35.130 CC module/accel/dsa/accel_dsa.o 00:13:35.130 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:35.130 LIB libspdk_env_dpdk_rpc.a 00:13:35.130 SO libspdk_env_dpdk_rpc.so.6.0 00:13:35.130 SYMLINK libspdk_env_dpdk_rpc.so 00:13:35.130 CC module/accel/dsa/accel_dsa_rpc.o 00:13:35.130 CC module/keyring/file/keyring_rpc.o 00:13:35.130 CC module/accel/ioat/accel_ioat_rpc.o 00:13:35.130 CC module/accel/error/accel_error_rpc.o 00:13:35.130 CC module/accel/iaa/accel_iaa_rpc.o 00:13:35.388 LIB libspdk_scheduler_dynamic.a 00:13:35.388 SO libspdk_scheduler_dynamic.so.4.0 00:13:35.388 LIB libspdk_keyring_file.a 00:13:35.388 LIB libspdk_blob_bdev.a 00:13:35.388 SO libspdk_keyring_file.so.2.0 00:13:35.388 LIB libspdk_accel_ioat.a 00:13:35.388 SYMLINK libspdk_scheduler_dynamic.so 00:13:35.388 LIB libspdk_accel_dsa.a 00:13:35.388 SO libspdk_blob_bdev.so.11.0 00:13:35.388 LIB libspdk_accel_error.a 00:13:35.388 LIB libspdk_accel_iaa.a 00:13:35.388 SO libspdk_accel_dsa.so.5.0 00:13:35.388 SO libspdk_accel_ioat.so.6.0 00:13:35.388 SO libspdk_accel_error.so.2.0 00:13:35.388 SO libspdk_accel_iaa.so.3.0 00:13:35.388 SYMLINK libspdk_keyring_file.so 00:13:35.388 SYMLINK libspdk_blob_bdev.so 00:13:35.388 SYMLINK libspdk_accel_ioat.so 00:13:35.646 SYMLINK libspdk_accel_dsa.so 00:13:35.646 SYMLINK libspdk_accel_error.so 00:13:35.646 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:35.646 SYMLINK libspdk_accel_iaa.so 00:13:35.646 CC module/fsdev/aio/fsdev_aio_rpc.o 00:13:35.646 CC module/fsdev/aio/linux_aio_mgr.o 00:13:35.646 CC module/scheduler/gscheduler/gscheduler.o 00:13:35.646 CC module/keyring/linux/keyring.o 00:13:35.646 LIB libspdk_scheduler_dpdk_governor.a 00:13:35.646 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:35.904 CC module/bdev/error/vbdev_error.o 00:13:35.905 CC module/bdev/delay/vbdev_delay.o 00:13:35.905 LIB libspdk_scheduler_gscheduler.a 00:13:35.905 CC module/blobfs/bdev/blobfs_bdev.o 00:13:35.905 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:35.905 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:35.905 SO libspdk_scheduler_gscheduler.so.4.0 00:13:35.905 CC module/keyring/linux/keyring_rpc.o 00:13:35.905 CC module/bdev/gpt/gpt.o 00:13:35.905 SYMLINK libspdk_scheduler_gscheduler.so 00:13:35.905 LIB libspdk_fsdev_aio.a 00:13:35.905 SO libspdk_fsdev_aio.so.1.0 00:13:35.905 CC module/bdev/lvol/vbdev_lvol.o 00:13:35.905 LIB libspdk_sock_posix.a 00:13:35.905 LIB libspdk_keyring_linux.a 00:13:35.905 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:35.905 SO libspdk_sock_posix.so.6.0 00:13:35.905 SO libspdk_keyring_linux.so.1.0 00:13:35.905 LIB libspdk_blobfs_bdev.a 00:13:36.163 SYMLINK libspdk_fsdev_aio.so 00:13:36.163 CC module/bdev/gpt/vbdev_gpt.o 00:13:36.163 SO libspdk_blobfs_bdev.so.6.0 00:13:36.163 CC module/bdev/malloc/bdev_malloc.o 00:13:36.163 SYMLINK libspdk_sock_posix.so 00:13:36.163 SYMLINK libspdk_keyring_linux.so 00:13:36.163 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:36.163 CC module/bdev/error/vbdev_error_rpc.o 00:13:36.163 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:36.163 SYMLINK 
libspdk_blobfs_bdev.so 00:13:36.163 LIB libspdk_bdev_error.a 00:13:36.422 LIB libspdk_bdev_delay.a 00:13:36.422 CC module/bdev/null/bdev_null.o 00:13:36.422 SO libspdk_bdev_error.so.6.0 00:13:36.422 SO libspdk_bdev_delay.so.6.0 00:13:36.422 CC module/bdev/nvme/bdev_nvme.o 00:13:36.422 SYMLINK libspdk_bdev_error.so 00:13:36.422 SYMLINK libspdk_bdev_delay.so 00:13:36.422 LIB libspdk_bdev_gpt.a 00:13:36.422 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:36.422 CC module/bdev/passthru/vbdev_passthru.o 00:13:36.422 SO libspdk_bdev_gpt.so.6.0 00:13:36.422 CC module/bdev/raid/bdev_raid.o 00:13:36.422 SYMLINK libspdk_bdev_gpt.so 00:13:36.681 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:36.681 CC module/bdev/split/vbdev_split.o 00:13:36.681 LIB libspdk_bdev_malloc.a 00:13:36.681 LIB libspdk_bdev_lvol.a 00:13:36.681 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:36.681 SO libspdk_bdev_malloc.so.6.0 00:13:36.681 SO libspdk_bdev_lvol.so.6.0 00:13:36.681 SYMLINK libspdk_bdev_malloc.so 00:13:36.681 CC module/bdev/null/bdev_null_rpc.o 00:13:36.681 CC module/bdev/nvme/nvme_rpc.o 00:13:36.681 SYMLINK libspdk_bdev_lvol.so 00:13:36.681 CC module/bdev/nvme/bdev_mdns_client.o 00:13:36.681 CC module/bdev/nvme/vbdev_opal.o 00:13:36.939 LIB libspdk_bdev_passthru.a 00:13:36.940 SO libspdk_bdev_passthru.so.6.0 00:13:36.940 CC module/bdev/split/vbdev_split_rpc.o 00:13:36.940 LIB libspdk_bdev_null.a 00:13:36.940 SYMLINK libspdk_bdev_passthru.so 00:13:36.940 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:36.940 SO libspdk_bdev_null.so.6.0 00:13:36.940 SYMLINK libspdk_bdev_null.so 00:13:36.940 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:36.940 CC module/bdev/raid/bdev_raid_rpc.o 00:13:36.940 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:36.940 CC module/bdev/raid/bdev_raid_sb.o 00:13:36.940 LIB libspdk_bdev_split.a 00:13:37.198 CC module/bdev/xnvme/bdev_xnvme.o 00:13:37.198 SO libspdk_bdev_split.so.6.0 00:13:37.198 LIB libspdk_bdev_zone_block.a 00:13:37.198 SO libspdk_bdev_zone_block.so.6.0 00:13:37.198 SYMLINK libspdk_bdev_split.so 00:13:37.198 SYMLINK libspdk_bdev_zone_block.so 00:13:37.198 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:13:37.198 CC module/bdev/raid/raid0.o 00:13:37.457 CC module/bdev/raid/raid1.o 00:13:37.457 CC module/bdev/aio/bdev_aio.o 00:13:37.457 CC module/bdev/aio/bdev_aio_rpc.o 00:13:37.457 CC module/bdev/ftl/bdev_ftl.o 00:13:37.457 LIB libspdk_bdev_xnvme.a 00:13:37.457 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:37.457 SO libspdk_bdev_xnvme.so.3.0 00:13:37.457 CC module/bdev/iscsi/bdev_iscsi.o 00:13:37.457 SYMLINK libspdk_bdev_xnvme.so 00:13:37.457 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:37.715 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:37.715 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:37.715 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:37.715 CC module/bdev/raid/concat.o 00:13:37.715 LIB libspdk_bdev_aio.a 00:13:37.974 SO libspdk_bdev_aio.so.6.0 00:13:37.974 LIB libspdk_bdev_iscsi.a 00:13:37.974 LIB libspdk_bdev_ftl.a 00:13:37.974 SYMLINK libspdk_bdev_aio.so 00:13:37.974 SO libspdk_bdev_iscsi.so.6.0 00:13:37.974 SO libspdk_bdev_ftl.so.6.0 00:13:37.974 SYMLINK libspdk_bdev_iscsi.so 00:13:37.974 SYMLINK libspdk_bdev_ftl.so 00:13:37.974 LIB libspdk_bdev_raid.a 00:13:38.233 SO libspdk_bdev_raid.so.6.0 00:13:38.233 LIB libspdk_bdev_virtio.a 00:13:38.233 SO libspdk_bdev_virtio.so.6.0 00:13:38.233 SYMLINK libspdk_bdev_raid.so 00:13:38.233 SYMLINK libspdk_bdev_virtio.so 00:13:40.197 LIB libspdk_bdev_nvme.a 00:13:40.197 SO libspdk_bdev_nvme.so.7.1 00:13:40.197 SYMLINK 
libspdk_bdev_nvme.so 00:13:40.763 CC module/event/subsystems/keyring/keyring.o 00:13:40.763 CC module/event/subsystems/scheduler/scheduler.o 00:13:40.763 CC module/event/subsystems/vmd/vmd.o 00:13:40.763 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:40.763 CC module/event/subsystems/iobuf/iobuf.o 00:13:40.763 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:40.763 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:40.763 CC module/event/subsystems/fsdev/fsdev.o 00:13:40.763 CC module/event/subsystems/sock/sock.o 00:13:41.021 LIB libspdk_event_keyring.a 00:13:41.021 LIB libspdk_event_iobuf.a 00:13:41.021 LIB libspdk_event_fsdev.a 00:13:41.021 LIB libspdk_event_scheduler.a 00:13:41.021 LIB libspdk_event_vhost_blk.a 00:13:41.021 LIB libspdk_event_vmd.a 00:13:41.021 LIB libspdk_event_sock.a 00:13:41.021 SO libspdk_event_keyring.so.1.0 00:13:41.021 SO libspdk_event_scheduler.so.4.0 00:13:41.021 SO libspdk_event_vhost_blk.so.3.0 00:13:41.021 SO libspdk_event_fsdev.so.1.0 00:13:41.021 SO libspdk_event_iobuf.so.3.0 00:13:41.021 SO libspdk_event_vmd.so.6.0 00:13:41.021 SO libspdk_event_sock.so.5.0 00:13:41.021 SYMLINK libspdk_event_keyring.so 00:13:41.021 SYMLINK libspdk_event_scheduler.so 00:13:41.021 SYMLINK libspdk_event_vhost_blk.so 00:13:41.021 SYMLINK libspdk_event_fsdev.so 00:13:41.021 SYMLINK libspdk_event_vmd.so 00:13:41.021 SYMLINK libspdk_event_sock.so 00:13:41.021 SYMLINK libspdk_event_iobuf.so 00:13:41.279 CC module/event/subsystems/accel/accel.o 00:13:41.538 LIB libspdk_event_accel.a 00:13:41.538 SO libspdk_event_accel.so.6.0 00:13:41.538 SYMLINK libspdk_event_accel.so 00:13:41.797 CC module/event/subsystems/bdev/bdev.o 00:13:42.056 LIB libspdk_event_bdev.a 00:13:42.056 SO libspdk_event_bdev.so.6.0 00:13:42.314 SYMLINK libspdk_event_bdev.so 00:13:42.314 CC module/event/subsystems/nbd/nbd.o 00:13:42.314 CC module/event/subsystems/scsi/scsi.o 00:13:42.572 CC module/event/subsystems/ublk/ublk.o 00:13:42.572 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:42.572 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:42.572 LIB libspdk_event_scsi.a 00:13:42.572 LIB libspdk_event_ublk.a 00:13:42.572 LIB libspdk_event_nbd.a 00:13:42.572 SO libspdk_event_scsi.so.6.0 00:13:42.572 SO libspdk_event_ublk.so.3.0 00:13:42.572 SO libspdk_event_nbd.so.6.0 00:13:42.830 SYMLINK libspdk_event_scsi.so 00:13:42.830 SYMLINK libspdk_event_ublk.so 00:13:42.830 SYMLINK libspdk_event_nbd.so 00:13:42.830 LIB libspdk_event_nvmf.a 00:13:42.830 SO libspdk_event_nvmf.so.6.0 00:13:42.830 SYMLINK libspdk_event_nvmf.so 00:13:42.830 CC module/event/subsystems/iscsi/iscsi.o 00:13:42.830 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:43.089 LIB libspdk_event_vhost_scsi.a 00:13:43.089 LIB libspdk_event_iscsi.a 00:13:43.089 SO libspdk_event_vhost_scsi.so.3.0 00:13:43.089 SO libspdk_event_iscsi.so.6.0 00:13:43.347 SYMLINK libspdk_event_vhost_scsi.so 00:13:43.347 SYMLINK libspdk_event_iscsi.so 00:13:43.347 SO libspdk.so.6.0 00:13:43.347 SYMLINK libspdk.so 00:13:43.616 CXX app/trace/trace.o 00:13:43.616 CC test/rpc_client/rpc_client_test.o 00:13:43.617 TEST_HEADER include/spdk/accel.h 00:13:43.617 TEST_HEADER include/spdk/accel_module.h 00:13:43.617 TEST_HEADER include/spdk/assert.h 00:13:43.617 TEST_HEADER include/spdk/barrier.h 00:13:43.617 TEST_HEADER include/spdk/base64.h 00:13:43.617 TEST_HEADER include/spdk/bdev.h 00:13:43.617 TEST_HEADER include/spdk/bdev_module.h 00:13:43.617 TEST_HEADER include/spdk/bdev_zone.h 00:13:43.617 TEST_HEADER include/spdk/bit_array.h 00:13:43.617 TEST_HEADER 
include/spdk/bit_pool.h 00:13:43.617 TEST_HEADER include/spdk/blob_bdev.h 00:13:43.617 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:43.617 TEST_HEADER include/spdk/blobfs.h 00:13:43.617 TEST_HEADER include/spdk/blob.h 00:13:43.617 TEST_HEADER include/spdk/conf.h 00:13:43.617 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:43.617 TEST_HEADER include/spdk/config.h 00:13:43.617 TEST_HEADER include/spdk/cpuset.h 00:13:43.617 TEST_HEADER include/spdk/crc16.h 00:13:43.617 TEST_HEADER include/spdk/crc32.h 00:13:43.617 TEST_HEADER include/spdk/crc64.h 00:13:43.617 TEST_HEADER include/spdk/dif.h 00:13:43.617 TEST_HEADER include/spdk/dma.h 00:13:43.617 TEST_HEADER include/spdk/endian.h 00:13:43.617 TEST_HEADER include/spdk/env_dpdk.h 00:13:43.879 TEST_HEADER include/spdk/env.h 00:13:43.879 TEST_HEADER include/spdk/event.h 00:13:43.879 TEST_HEADER include/spdk/fd_group.h 00:13:43.879 TEST_HEADER include/spdk/fd.h 00:13:43.879 TEST_HEADER include/spdk/file.h 00:13:43.879 TEST_HEADER include/spdk/fsdev.h 00:13:43.879 TEST_HEADER include/spdk/fsdev_module.h 00:13:43.879 TEST_HEADER include/spdk/ftl.h 00:13:43.879 TEST_HEADER include/spdk/fuse_dispatcher.h 00:13:43.879 CC examples/util/zipf/zipf.o 00:13:43.879 TEST_HEADER include/spdk/gpt_spec.h 00:13:43.879 TEST_HEADER include/spdk/hexlify.h 00:13:43.879 CC test/thread/poller_perf/poller_perf.o 00:13:43.879 TEST_HEADER include/spdk/histogram_data.h 00:13:43.879 TEST_HEADER include/spdk/idxd.h 00:13:43.879 TEST_HEADER include/spdk/idxd_spec.h 00:13:43.879 TEST_HEADER include/spdk/init.h 00:13:43.879 CC examples/ioat/perf/perf.o 00:13:43.879 TEST_HEADER include/spdk/ioat.h 00:13:43.879 TEST_HEADER include/spdk/ioat_spec.h 00:13:43.879 TEST_HEADER include/spdk/iscsi_spec.h 00:13:43.879 TEST_HEADER include/spdk/json.h 00:13:43.879 TEST_HEADER include/spdk/jsonrpc.h 00:13:43.879 TEST_HEADER include/spdk/keyring.h 00:13:43.879 TEST_HEADER include/spdk/keyring_module.h 00:13:43.879 TEST_HEADER include/spdk/likely.h 00:13:43.879 TEST_HEADER include/spdk/log.h 00:13:43.879 TEST_HEADER include/spdk/lvol.h 00:13:43.879 TEST_HEADER include/spdk/md5.h 00:13:43.879 TEST_HEADER include/spdk/memory.h 00:13:43.879 TEST_HEADER include/spdk/mmio.h 00:13:43.880 TEST_HEADER include/spdk/nbd.h 00:13:43.880 CC test/app/bdev_svc/bdev_svc.o 00:13:43.880 TEST_HEADER include/spdk/net.h 00:13:43.880 CC test/dma/test_dma/test_dma.o 00:13:43.880 TEST_HEADER include/spdk/notify.h 00:13:43.880 TEST_HEADER include/spdk/nvme.h 00:13:43.880 TEST_HEADER include/spdk/nvme_intel.h 00:13:43.880 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:43.880 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:43.880 TEST_HEADER include/spdk/nvme_spec.h 00:13:43.880 TEST_HEADER include/spdk/nvme_zns.h 00:13:43.880 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:43.880 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:43.880 TEST_HEADER include/spdk/nvmf.h 00:13:43.880 TEST_HEADER include/spdk/nvmf_spec.h 00:13:43.880 TEST_HEADER include/spdk/nvmf_transport.h 00:13:43.880 TEST_HEADER include/spdk/opal.h 00:13:43.880 TEST_HEADER include/spdk/opal_spec.h 00:13:43.880 TEST_HEADER include/spdk/pci_ids.h 00:13:43.880 TEST_HEADER include/spdk/pipe.h 00:13:43.880 TEST_HEADER include/spdk/queue.h 00:13:43.880 TEST_HEADER include/spdk/reduce.h 00:13:43.880 TEST_HEADER include/spdk/rpc.h 00:13:43.880 TEST_HEADER include/spdk/scheduler.h 00:13:43.880 TEST_HEADER include/spdk/scsi.h 00:13:43.880 TEST_HEADER include/spdk/scsi_spec.h 00:13:43.880 TEST_HEADER include/spdk/sock.h 00:13:43.880 TEST_HEADER include/spdk/stdinc.h 
00:13:43.880 TEST_HEADER include/spdk/string.h 00:13:43.880 TEST_HEADER include/spdk/thread.h 00:13:43.880 TEST_HEADER include/spdk/trace.h 00:13:43.880 TEST_HEADER include/spdk/trace_parser.h 00:13:43.880 TEST_HEADER include/spdk/tree.h 00:13:43.880 TEST_HEADER include/spdk/ublk.h 00:13:43.880 TEST_HEADER include/spdk/util.h 00:13:43.880 TEST_HEADER include/spdk/uuid.h 00:13:43.880 CC test/env/mem_callbacks/mem_callbacks.o 00:13:43.880 TEST_HEADER include/spdk/version.h 00:13:43.880 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:43.880 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:43.880 TEST_HEADER include/spdk/vhost.h 00:13:43.880 TEST_HEADER include/spdk/vmd.h 00:13:43.880 TEST_HEADER include/spdk/xor.h 00:13:43.880 LINK rpc_client_test 00:13:43.880 TEST_HEADER include/spdk/zipf.h 00:13:43.880 CXX test/cpp_headers/accel.o 00:13:43.880 LINK poller_perf 00:13:43.880 LINK zipf 00:13:43.880 LINK interrupt_tgt 00:13:44.138 LINK ioat_perf 00:13:44.138 LINK bdev_svc 00:13:44.138 CXX test/cpp_headers/accel_module.o 00:13:44.138 CXX test/cpp_headers/assert.o 00:13:44.138 LINK spdk_trace 00:13:44.398 CC test/app/histogram_perf/histogram_perf.o 00:13:44.398 CXX test/cpp_headers/barrier.o 00:13:44.398 CC examples/ioat/verify/verify.o 00:13:44.398 CC test/event/event_perf/event_perf.o 00:13:44.398 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:44.398 CC test/app/jsoncat/jsoncat.o 00:13:44.398 CC test/app/stub/stub.o 00:13:44.398 CXX test/cpp_headers/base64.o 00:13:44.398 CC app/trace_record/trace_record.o 00:13:44.398 LINK histogram_perf 00:13:44.398 LINK test_dma 00:13:44.655 LINK event_perf 00:13:44.655 LINK mem_callbacks 00:13:44.655 LINK jsoncat 00:13:44.655 CXX test/cpp_headers/bdev.o 00:13:44.655 LINK verify 00:13:44.655 CXX test/cpp_headers/bdev_module.o 00:13:44.655 LINK stub 00:13:44.655 CXX test/cpp_headers/bdev_zone.o 00:13:44.655 CC test/event/reactor/reactor.o 00:13:44.913 LINK spdk_trace_record 00:13:44.913 CC test/env/vtophys/vtophys.o 00:13:44.913 LINK reactor 00:13:44.913 LINK nvme_fuzz 00:13:44.913 CXX test/cpp_headers/bit_array.o 00:13:44.913 LINK vtophys 00:13:44.913 CC test/accel/dif/dif.o 00:13:45.172 CC examples/thread/thread/thread_ex.o 00:13:45.172 CC app/nvmf_tgt/nvmf_main.o 00:13:45.172 CC test/nvme/aer/aer.o 00:13:45.172 CC test/blobfs/mkfs/mkfs.o 00:13:45.172 CXX test/cpp_headers/bit_pool.o 00:13:45.172 CC test/event/reactor_perf/reactor_perf.o 00:13:45.172 CC test/lvol/esnap/esnap.o 00:13:45.172 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:45.172 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:45.429 LINK mkfs 00:13:45.429 LINK thread 00:13:45.429 LINK reactor_perf 00:13:45.430 LINK nvmf_tgt 00:13:45.430 CXX test/cpp_headers/blob_bdev.o 00:13:45.430 LINK env_dpdk_post_init 00:13:45.430 LINK aer 00:13:45.688 CC test/event/app_repeat/app_repeat.o 00:13:45.688 CXX test/cpp_headers/blobfs_bdev.o 00:13:45.688 CC test/event/scheduler/scheduler.o 00:13:45.688 CC test/env/memory/memory_ut.o 00:13:45.688 CC app/iscsi_tgt/iscsi_tgt.o 00:13:45.946 CC test/nvme/reset/reset.o 00:13:45.946 CC examples/sock/hello_world/hello_sock.o 00:13:45.946 LINK app_repeat 00:13:45.946 CXX test/cpp_headers/blobfs.o 00:13:45.946 LINK dif 00:13:45.946 LINK scheduler 00:13:45.946 CXX test/cpp_headers/blob.o 00:13:45.946 LINK iscsi_tgt 00:13:46.246 LINK reset 00:13:46.246 CC test/nvme/sgl/sgl.o 00:13:46.246 CXX test/cpp_headers/conf.o 00:13:46.246 LINK hello_sock 00:13:46.246 CXX test/cpp_headers/config.o 00:13:46.246 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:46.505 CXX 
test/cpp_headers/cpuset.o 00:13:46.505 CC test/env/pci/pci_ut.o 00:13:46.505 CC test/bdev/bdevio/bdevio.o 00:13:46.505 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:46.505 LINK sgl 00:13:46.505 CC app/spdk_tgt/spdk_tgt.o 00:13:46.505 CC examples/vmd/lsvmd/lsvmd.o 00:13:46.505 CXX test/cpp_headers/crc16.o 00:13:46.763 LINK lsvmd 00:13:46.763 LINK spdk_tgt 00:13:46.763 CXX test/cpp_headers/crc32.o 00:13:46.763 CC test/nvme/e2edp/nvme_dp.o 00:13:47.021 LINK bdevio 00:13:47.021 CXX test/cpp_headers/crc64.o 00:13:47.021 LINK pci_ut 00:13:47.021 LINK vhost_fuzz 00:13:47.021 CC examples/vmd/led/led.o 00:13:47.021 CC app/spdk_lspci/spdk_lspci.o 00:13:47.279 LINK nvme_dp 00:13:47.279 CXX test/cpp_headers/dif.o 00:13:47.279 LINK led 00:13:47.279 CC test/nvme/overhead/overhead.o 00:13:47.279 LINK spdk_lspci 00:13:47.279 LINK memory_ut 00:13:47.279 CC test/nvme/err_injection/err_injection.o 00:13:47.279 CXX test/cpp_headers/dma.o 00:13:47.279 CC test/nvme/startup/startup.o 00:13:47.537 CC app/spdk_nvme_perf/perf.o 00:13:47.537 CXX test/cpp_headers/endian.o 00:13:47.537 CC test/nvme/reserve/reserve.o 00:13:47.537 LINK err_injection 00:13:47.795 LINK startup 00:13:47.795 CC app/spdk_nvme_identify/identify.o 00:13:47.795 CC examples/idxd/perf/perf.o 00:13:47.795 LINK overhead 00:13:47.795 LINK iscsi_fuzz 00:13:47.795 CXX test/cpp_headers/env_dpdk.o 00:13:47.795 CC test/nvme/simple_copy/simple_copy.o 00:13:47.795 LINK reserve 00:13:48.053 CC app/spdk_nvme_discover/discovery_aer.o 00:13:48.053 CC app/spdk_top/spdk_top.o 00:13:48.053 CXX test/cpp_headers/env.o 00:13:48.053 CXX test/cpp_headers/event.o 00:13:48.053 LINK idxd_perf 00:13:48.053 CC app/vhost/vhost.o 00:13:48.311 LINK spdk_nvme_discover 00:13:48.311 LINK simple_copy 00:13:48.311 CXX test/cpp_headers/fd_group.o 00:13:48.311 CC test/nvme/connect_stress/connect_stress.o 00:13:48.311 LINK vhost 00:13:48.311 CXX test/cpp_headers/fd.o 00:13:48.569 CXX test/cpp_headers/file.o 00:13:48.569 CC examples/fsdev/hello_world/hello_fsdev.o 00:13:48.569 LINK connect_stress 00:13:48.569 CC test/nvme/boot_partition/boot_partition.o 00:13:48.569 LINK spdk_nvme_perf 00:13:48.569 CC test/nvme/compliance/nvme_compliance.o 00:13:48.569 CXX test/cpp_headers/fsdev.o 00:13:48.827 CC test/nvme/fused_ordering/fused_ordering.o 00:13:48.827 LINK boot_partition 00:13:48.827 LINK hello_fsdev 00:13:48.827 LINK spdk_nvme_identify 00:13:48.827 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:48.827 CXX test/cpp_headers/fsdev_module.o 00:13:48.827 CC test/nvme/fdp/fdp.o 00:13:48.827 LINK fused_ordering 00:13:49.086 CXX test/cpp_headers/ftl.o 00:13:49.086 CC test/nvme/cuse/cuse.o 00:13:49.086 LINK doorbell_aers 00:13:49.086 LINK nvme_compliance 00:13:49.086 CC app/spdk_dd/spdk_dd.o 00:13:49.086 LINK spdk_top 00:13:49.086 CC examples/accel/perf/accel_perf.o 00:13:49.345 CXX test/cpp_headers/fuse_dispatcher.o 00:13:49.345 LINK fdp 00:13:49.345 CC examples/blob/hello_world/hello_blob.o 00:13:49.345 CC examples/blob/cli/blobcli.o 00:13:49.345 CC examples/nvme/hello_world/hello_world.o 00:13:49.345 CXX test/cpp_headers/gpt_spec.o 00:13:49.603 CC app/fio/nvme/fio_plugin.o 00:13:49.603 CC examples/nvme/reconnect/reconnect.o 00:13:49.603 LINK spdk_dd 00:13:49.603 CXX test/cpp_headers/hexlify.o 00:13:49.603 LINK hello_blob 00:13:49.603 LINK hello_world 00:13:49.861 CXX test/cpp_headers/histogram_data.o 00:13:49.861 LINK accel_perf 00:13:49.861 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:49.861 CXX test/cpp_headers/idxd.o 00:13:49.861 CC examples/nvme/arbitration/arbitration.o 
00:13:49.861 LINK reconnect 00:13:49.861 LINK blobcli 00:13:50.119 CXX test/cpp_headers/idxd_spec.o 00:13:50.119 CC examples/nvme/hotplug/hotplug.o 00:13:50.120 CC app/fio/bdev/fio_plugin.o 00:13:50.120 LINK spdk_nvme 00:13:50.120 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:50.378 CXX test/cpp_headers/init.o 00:13:50.378 CC examples/nvme/abort/abort.o 00:13:50.378 LINK arbitration 00:13:50.378 LINK hotplug 00:13:50.378 LINK cmb_copy 00:13:50.378 CXX test/cpp_headers/ioat.o 00:13:50.378 LINK nvme_manage 00:13:50.378 CC examples/bdev/hello_world/hello_bdev.o 00:13:50.636 CXX test/cpp_headers/ioat_spec.o 00:13:50.636 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:50.636 CXX test/cpp_headers/iscsi_spec.o 00:13:50.636 LINK cuse 00:13:50.636 CXX test/cpp_headers/json.o 00:13:50.636 LINK spdk_bdev 00:13:50.636 CXX test/cpp_headers/jsonrpc.o 00:13:50.895 LINK abort 00:13:50.895 LINK hello_bdev 00:13:50.895 CXX test/cpp_headers/keyring.o 00:13:50.895 CC examples/bdev/bdevperf/bdevperf.o 00:13:50.895 LINK pmr_persistence 00:13:50.895 CXX test/cpp_headers/keyring_module.o 00:13:50.895 CXX test/cpp_headers/likely.o 00:13:50.895 CXX test/cpp_headers/log.o 00:13:50.895 CXX test/cpp_headers/lvol.o 00:13:50.895 CXX test/cpp_headers/md5.o 00:13:50.895 CXX test/cpp_headers/memory.o 00:13:50.895 CXX test/cpp_headers/mmio.o 00:13:50.895 CXX test/cpp_headers/nbd.o 00:13:51.154 CXX test/cpp_headers/net.o 00:13:51.154 CXX test/cpp_headers/notify.o 00:13:51.154 CXX test/cpp_headers/nvme.o 00:13:51.154 CXX test/cpp_headers/nvme_intel.o 00:13:51.154 CXX test/cpp_headers/nvme_ocssd.o 00:13:51.154 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:51.154 CXX test/cpp_headers/nvme_spec.o 00:13:51.154 CXX test/cpp_headers/nvme_zns.o 00:13:51.154 CXX test/cpp_headers/nvmf_cmd.o 00:13:51.154 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:51.154 CXX test/cpp_headers/nvmf.o 00:13:51.413 CXX test/cpp_headers/nvmf_spec.o 00:13:51.413 CXX test/cpp_headers/nvmf_transport.o 00:13:51.413 CXX test/cpp_headers/opal.o 00:13:51.413 CXX test/cpp_headers/opal_spec.o 00:13:51.413 CXX test/cpp_headers/pipe.o 00:13:51.413 CXX test/cpp_headers/pci_ids.o 00:13:51.413 CXX test/cpp_headers/queue.o 00:13:51.413 CXX test/cpp_headers/reduce.o 00:13:51.413 CXX test/cpp_headers/rpc.o 00:13:51.413 CXX test/cpp_headers/scheduler.o 00:13:51.672 CXX test/cpp_headers/scsi.o 00:13:51.672 CXX test/cpp_headers/scsi_spec.o 00:13:51.672 CXX test/cpp_headers/sock.o 00:13:51.672 CXX test/cpp_headers/stdinc.o 00:13:51.672 CXX test/cpp_headers/thread.o 00:13:51.672 CXX test/cpp_headers/string.o 00:13:51.672 CXX test/cpp_headers/trace.o 00:13:51.672 CXX test/cpp_headers/trace_parser.o 00:13:51.672 CXX test/cpp_headers/tree.o 00:13:51.965 CXX test/cpp_headers/ublk.o 00:13:51.965 CXX test/cpp_headers/util.o 00:13:51.965 CXX test/cpp_headers/uuid.o 00:13:51.965 CXX test/cpp_headers/version.o 00:13:51.965 CXX test/cpp_headers/vfio_user_pci.o 00:13:51.965 CXX test/cpp_headers/vfio_user_spec.o 00:13:51.965 CXX test/cpp_headers/vhost.o 00:13:51.965 LINK bdevperf 00:13:51.965 CXX test/cpp_headers/vmd.o 00:13:51.965 CXX test/cpp_headers/xor.o 00:13:51.965 CXX test/cpp_headers/zipf.o 00:13:52.554 CC examples/nvmf/nvmf/nvmf.o 00:13:52.554 LINK esnap 00:13:52.813 LINK nvmf 00:13:53.071 00:13:53.071 real 1m40.148s 00:13:53.071 user 9m33.663s 00:13:53.071 sys 1m48.004s 00:13:53.071 11:27:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:13:53.071 11:27:58 make -- common/autotest_common.sh@10 -- $ set +x 00:13:53.071 ************************************ 
00:13:53.071 END TEST make 00:13:53.071 ************************************ 00:13:53.071 11:27:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:53.071 11:27:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:13:53.071 11:27:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:13:53.071 11:27:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:53.071 11:27:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:53.071 11:27:58 -- pm/common@44 -- $ pid=5445 00:13:53.071 11:27:58 -- pm/common@50 -- $ kill -TERM 5445 00:13:53.071 11:27:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:53.071 11:27:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:53.071 11:27:58 -- pm/common@44 -- $ pid=5447 00:13:53.071 11:27:58 -- pm/common@50 -- $ kill -TERM 5447 00:13:53.071 11:27:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:13:53.071 11:27:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:13:53.071 11:27:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:53.071 11:27:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:13:53.071 11:27:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:53.331 11:27:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:53.331 11:27:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.331 11:27:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.331 11:27:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.331 11:27:58 -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.331 11:27:58 -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.331 11:27:58 -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.331 11:27:58 -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.331 11:27:58 -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.331 11:27:58 -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.331 11:27:58 -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.331 11:27:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.331 11:27:58 -- scripts/common.sh@344 -- # case "$op" in 00:13:53.331 11:27:58 -- scripts/common.sh@345 -- # : 1 00:13:53.331 11:27:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.331 11:27:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.331 11:27:58 -- scripts/common.sh@365 -- # decimal 1 00:13:53.331 11:27:58 -- scripts/common.sh@353 -- # local d=1 00:13:53.331 11:27:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.331 11:27:58 -- scripts/common.sh@355 -- # echo 1 00:13:53.331 11:27:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.331 11:27:58 -- scripts/common.sh@366 -- # decimal 2 00:13:53.331 11:27:58 -- scripts/common.sh@353 -- # local d=2 00:13:53.331 11:27:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.331 11:27:58 -- scripts/common.sh@355 -- # echo 2 00:13:53.331 11:27:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.331 11:27:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.331 11:27:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.331 11:27:58 -- scripts/common.sh@368 -- # return 0 00:13:53.331 11:27:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.331 11:27:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:53.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.331 --rc genhtml_branch_coverage=1 00:13:53.331 --rc genhtml_function_coverage=1 00:13:53.331 --rc genhtml_legend=1 00:13:53.331 --rc geninfo_all_blocks=1 00:13:53.331 --rc geninfo_unexecuted_blocks=1 00:13:53.331 00:13:53.331 ' 00:13:53.331 11:27:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:53.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.331 --rc genhtml_branch_coverage=1 00:13:53.331 --rc genhtml_function_coverage=1 00:13:53.331 --rc genhtml_legend=1 00:13:53.331 --rc geninfo_all_blocks=1 00:13:53.331 --rc geninfo_unexecuted_blocks=1 00:13:53.331 00:13:53.331 ' 00:13:53.331 11:27:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:53.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.331 --rc genhtml_branch_coverage=1 00:13:53.331 --rc genhtml_function_coverage=1 00:13:53.331 --rc genhtml_legend=1 00:13:53.331 --rc geninfo_all_blocks=1 00:13:53.331 --rc geninfo_unexecuted_blocks=1 00:13:53.331 00:13:53.331 ' 00:13:53.331 11:27:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:53.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.331 --rc genhtml_branch_coverage=1 00:13:53.331 --rc genhtml_function_coverage=1 00:13:53.331 --rc genhtml_legend=1 00:13:53.331 --rc geninfo_all_blocks=1 00:13:53.331 --rc geninfo_unexecuted_blocks=1 00:13:53.331 00:13:53.331 ' 00:13:53.331 11:27:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:53.331 11:27:58 -- nvmf/common.sh@7 -- # uname -s 00:13:53.331 11:27:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.331 11:27:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.331 11:27:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.331 11:27:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.331 11:27:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.331 11:27:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.331 11:27:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.331 11:27:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.331 11:27:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.331 11:27:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.331 11:27:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11e88e29-ee60-469d-aa56-4628a056478e 00:13:53.331 
11:27:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=11e88e29-ee60-469d-aa56-4628a056478e 00:13:53.331 11:27:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.332 11:27:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.332 11:27:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:53.332 11:27:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.332 11:27:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.332 11:27:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.332 11:27:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.332 11:27:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.332 11:27:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.332 11:27:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.332 11:27:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.332 11:27:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.332 11:27:58 -- paths/export.sh@5 -- # export PATH 00:13:53.332 11:27:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.332 11:27:58 -- nvmf/common.sh@51 -- # : 0 00:13:53.332 11:27:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.332 11:27:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.332 11:27:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.332 11:27:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.332 11:27:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.332 11:27:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:53.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.332 11:27:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.332 11:27:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.332 11:27:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.332 11:27:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:53.332 11:27:58 -- spdk/autotest.sh@32 -- # uname -s 00:13:53.332 11:27:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:53.332 11:27:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:53.332 11:27:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:53.332 11:27:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:53.332 11:27:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:53.332 11:27:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:13:53.332 11:27:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:53.332 11:27:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:53.332 11:27:58 -- spdk/autotest.sh@48 -- # udevadm_pid=55032 00:13:53.332 11:27:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:53.332 11:27:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:13:53.332 11:27:58 -- pm/common@17 -- # local monitor 00:13:53.332 11:27:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:53.332 11:27:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:53.332 11:27:58 -- pm/common@25 -- # sleep 1 00:13:53.332 11:27:58 -- pm/common@21 -- # date +%s 00:13:53.332 11:27:58 -- pm/common@21 -- # date +%s 00:13:53.332 11:27:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732102078 00:13:53.332 11:27:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732102078 00:13:53.332 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732102078_collect-vmstat.pm.log 00:13:53.332 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732102078_collect-cpu-load.pm.log 00:13:54.267 11:27:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:54.267 11:27:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:54.267 11:27:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.267 11:27:59 -- common/autotest_common.sh@10 -- # set +x 00:13:54.267 11:27:59 -- spdk/autotest.sh@59 -- # create_test_list 00:13:54.267 11:27:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:13:54.267 11:27:59 -- common/autotest_common.sh@10 -- # set +x 00:13:54.267 11:28:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:54.525 11:28:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:54.525 11:28:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:54.525 11:28:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:13:54.525 11:28:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:54.525 11:28:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:54.525 11:28:00 -- common/autotest_common.sh@1457 -- # uname 00:13:54.525 11:28:00 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:13:54.525 11:28:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:54.525 11:28:00 -- common/autotest_common.sh@1477 -- # uname 00:13:54.525 11:28:00 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:13:54.525 11:28:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:13:54.525 11:28:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:13:54.525 lcov: LCOV version 1.15 00:13:54.525 11:28:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:12.605 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:12.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:30.722 11:28:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:14:30.722 11:28:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.722 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.722 11:28:35 -- spdk/autotest.sh@78 -- # rm -f 00:14:30.722 11:28:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:30.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:30.722 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:30.722 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:30.722 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:14:30.722 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:14:30.722 11:28:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:14:30.722 11:28:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:30.722 11:28:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:30.722 11:28:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:30.722 11:28:36 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:30.722 11:28:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:14:30.722 11:28:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:30.722 11:28:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:30.722 11:28:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:14:30.722 11:28:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:30.722 11:28:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:30.722 11:28:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:14:30.722 11:28:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:30.722 11:28:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:30.722 No valid GPT data, bailing 00:14:30.722 11:28:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:30.722 11:28:36 -- scripts/common.sh@394 -- # pt= 00:14:30.722 11:28:36 -- scripts/common.sh@395 -- # return 1 00:14:30.722 11:28:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:30.722 1+0 records in 00:14:30.722 1+0 records out 00:14:30.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158448 s, 66.2 MB/s 00:14:30.722 11:28:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:30.722 11:28:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:30.722 11:28:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:14:30.722 11:28:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:14:30.722 11:28:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:30.722 No valid GPT data, bailing 00:14:30.722 11:28:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # pt= 00:14:30.981 11:28:36 -- scripts/common.sh@395 -- # return 1 00:14:30.981 11:28:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:30.981 1+0 records in 00:14:30.981 1+0 records out 00:14:30.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454412 s, 231 MB/s 00:14:30.981 11:28:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:30.981 11:28:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:30.981 11:28:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:14:30.981 11:28:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:14:30.981 11:28:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:14:30.981 No valid GPT data, bailing 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # pt= 00:14:30.981 11:28:36 -- scripts/common.sh@395 -- # return 1 00:14:30.981 11:28:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:14:30.981 1+0 
records in 00:14:30.981 1+0 records out 00:14:30.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510178 s, 206 MB/s 00:14:30.981 11:28:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:30.981 11:28:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:30.981 11:28:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:14:30.981 11:28:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:14:30.981 11:28:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:14:30.981 No valid GPT data, bailing 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # pt= 00:14:30.981 11:28:36 -- scripts/common.sh@395 -- # return 1 00:14:30.981 11:28:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:14:30.981 1+0 records in 00:14:30.981 1+0 records out 00:14:30.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496504 s, 211 MB/s 00:14:30.981 11:28:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:30.981 11:28:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:30.981 11:28:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:14:30.981 11:28:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:14:30.981 11:28:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:14:30.981 No valid GPT data, bailing 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:14:30.981 11:28:36 -- scripts/common.sh@394 -- # pt= 00:14:30.981 11:28:36 -- scripts/common.sh@395 -- # return 1 00:14:30.981 11:28:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:14:30.981 1+0 records in 00:14:30.981 1+0 records out 00:14:30.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444782 s, 236 MB/s 00:14:30.981 11:28:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:30.981 11:28:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:30.981 11:28:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:14:30.982 11:28:36 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:14:30.982 11:28:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:14:31.241 No valid GPT data, bailing 00:14:31.241 11:28:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:14:31.241 11:28:36 -- scripts/common.sh@394 -- # pt= 00:14:31.241 11:28:36 -- scripts/common.sh@395 -- # return 1 00:14:31.241 11:28:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:14:31.241 1+0 records in 00:14:31.241 1+0 records out 00:14:31.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048634 s, 216 MB/s 00:14:31.241 11:28:36 -- spdk/autotest.sh@105 -- # sync 00:14:31.241 11:28:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:31.241 11:28:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:31.241 11:28:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:33.144 11:28:38 -- spdk/autotest.sh@111 -- # uname -s 00:14:33.144 11:28:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:14:33.144 11:28:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:14:33.144 11:28:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:33.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:34.282 
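The trace above shows the pre-cleanup scrub autotest.sh performs on each NVMe namespace: zoned devices are skipped, a device with no recognizable partition table is treated as free, and its first MiB is zeroed with dd. A minimal standalone sketch of that pattern follows — the device glob and the blkid-only "in use" test are simplifications, and the real script also runs scripts/spdk-gpt.py for diagnostics, which is where the "No valid GPT data, bailing" lines come from:

for dev in /dev/nvme*n1; do
    name=$(basename "$dev")
    # Zoned namespaces are skipped; zero-filling them would fail.
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null)
    [[ $zoned && $zoned != none ]] && continue
    # No partition-table type from blkid means the device is considered free.
    if [[ -n "$(blkid -s PTTYPE -o value "$dev")" ]]; then
        echo "$dev is in use, skipping"
    else
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done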
Hugepages 00:14:34.282 node hugesize free / total 00:14:34.282 node0 1048576kB 0 / 0 00:14:34.282 node0 2048kB 0 / 0 00:14:34.282 00:14:34.282 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:34.282 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:34.282 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:14:34.282 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:14:34.540 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:14:34.540 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:14:34.540 11:28:40 -- spdk/autotest.sh@117 -- # uname -s 00:14:34.540 11:28:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:14:34.540 11:28:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:14:34.540 11:28:40 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:35.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:36.041 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.041 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.041 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.041 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.041 11:28:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:14:36.974 11:28:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:14:36.974 11:28:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:14:36.974 11:28:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:14:36.975 11:28:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:14:36.975 11:28:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:36.975 11:28:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:14:36.975 11:28:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:36.975 11:28:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:36.975 11:28:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:36.975 11:28:42 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:36.975 11:28:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:36.975 11:28:42 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:37.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:37.542 Waiting for block devices as requested 00:14:37.542 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:37.800 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:37.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:37.801 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.068 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:43.068 11:28:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:43.068 11:28:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:43.068 11:28:48 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:43.068 11:28:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:14:43.068 11:28:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:14:43.068 11:28:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:43.068 11:28:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:43.068 11:28:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:43.068 11:28:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1543 -- # continue 00:14:43.068 11:28:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:43.068 11:28:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:43.068 11:28:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:43.068 11:28:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:43.068 11:28:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:43.068 11:28:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:43.068 11:28:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:43.068 11:28:48 -- common/autotest_common.sh@1543 -- # continue 00:14:43.068 11:28:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:43.068 11:28:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:14:43.068 11:28:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:14:43.068 11:28:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:43.069 11:28:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:43.069 11:28:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:43.069 11:28:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1543 -- # continue 00:14:43.069 11:28:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:43.069 11:28:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:14:43.069 11:28:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:14:43.069 11:28:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:43.069 11:28:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:43.069 11:28:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:43.069 11:28:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:43.069 11:28:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
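The loop traced above resolves each PCI address to its kernel controller node and then inspects two id-ctrl fields before deciding whether a namespace revert is needed. A condensed sketch of one iteration (the BDF is an example from the trace; running it needs nvme-cli and root):

bdf=0000:00:10.0   # example address from the trace

# Resolve the BDF to its controller node (e.g. /dev/nvme1) via the
# /sys/class/nvme symlinks, mirroring get_nvme_ctrlr_from_bdf.
sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$sysfs")

# OACS bit 3 (0x8) advertises namespace-management support; the trace
# extracts it with the same grep/cut pipeline (yielding 0x12a here).
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # unvmcap of 0 means no unallocated capacity, so the trace simply
    # continues to the next controller; otherwise a revert would follow.
    (( unvmcap == 0 )) && echo "$ctrlr: nothing to revert"
fi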
00:14:43.069 11:28:48 -- common/autotest_common.sh@1543 -- # continue 00:14:43.069 11:28:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:14:43.069 11:28:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.069 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.069 11:28:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:14:43.069 11:28:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.069 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:14:43.069 11:28:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:43.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:44.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.541 11:28:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:14:44.541 11:28:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.541 11:28:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.541 11:28:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:14:44.541 11:28:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:14:44.541 11:28:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:14:44.541 11:28:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:14:44.541 11:28:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:14:44.541 11:28:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:14:44.541 11:28:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:14:44.541 11:28:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:14:44.541 11:28:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:44.541 11:28:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:14:44.541 11:28:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:44.541 11:28:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:44.541 11:28:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:44.541 11:28:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:44.541 11:28:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:44.541 11:28:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:44.541 11:28:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:44.541 11:28:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:44.541 11:28:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:44.541 11:28:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:44.541 11:28:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
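opal_revert_cleanup, traced next, builds its BDF list from gen_nvme.sh output and only acts on controllers whose PCI device ID matches 0x0a54; the QEMU controllers in this run all report 0x0010, so every iteration falls through. A sketch of that selection, with the repo path assumed and jq required:

rootdir=/home/vagrant/spdk_repo/spdk

# get_nvme_bdfs: SPDK emits its NVMe attach config as JSON; the traddr
# fields are the PCI addresses of the controllers it would claim.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    # Only the 0x0a54 device ID is opal-reverted; 0x0010 (QEMU) is skipped.
    [[ $device == 0x0a54 ]] && echo "would revert $bdf"
done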
00:14:44.541 11:28:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:14:44.541 11:28:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:44.541 11:28:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:44.541 11:28:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:14:44.541 11:28:50 -- common/autotest_common.sh@1572 -- # return 0 00:14:44.541 11:28:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:14:44.541 11:28:50 -- common/autotest_common.sh@1580 -- # return 0 00:14:44.541 11:28:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:14:44.541 11:28:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:14:44.541 11:28:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:14:44.541 11:28:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:14:44.541 11:28:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:14:44.541 11:28:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.541 11:28:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.541 11:28:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:14:44.541 11:28:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:44.541 11:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.541 11:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.541 11:28:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.541 ************************************ 00:14:44.541 START TEST env 00:14:44.541 ************************************ 00:14:44.541 11:28:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:44.801 * Looking for test storage... 00:14:44.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1693 -- # lcov --version 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.801 11:28:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.801 11:28:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.801 11:28:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.801 11:28:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.801 11:28:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.801 11:28:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.801 11:28:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.801 11:28:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.801 11:28:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.801 11:28:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.801 11:28:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.801 11:28:50 env -- scripts/common.sh@344 -- # case "$op" in 00:14:44.801 11:28:50 env -- scripts/common.sh@345 -- # : 1 00:14:44.801 11:28:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.801 11:28:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.801 11:28:50 env -- scripts/common.sh@365 -- # decimal 1 00:14:44.801 11:28:50 env -- scripts/common.sh@353 -- # local d=1 00:14:44.801 11:28:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.801 11:28:50 env -- scripts/common.sh@355 -- # echo 1 00:14:44.801 11:28:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.801 11:28:50 env -- scripts/common.sh@366 -- # decimal 2 00:14:44.801 11:28:50 env -- scripts/common.sh@353 -- # local d=2 00:14:44.801 11:28:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.801 11:28:50 env -- scripts/common.sh@355 -- # echo 2 00:14:44.801 11:28:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.801 11:28:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.801 11:28:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.801 11:28:50 env -- scripts/common.sh@368 -- # return 0 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.801 --rc genhtml_branch_coverage=1 00:14:44.801 --rc genhtml_function_coverage=1 00:14:44.801 --rc genhtml_legend=1 00:14:44.801 --rc geninfo_all_blocks=1 00:14:44.801 --rc geninfo_unexecuted_blocks=1 00:14:44.801 00:14:44.801 ' 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.801 --rc genhtml_branch_coverage=1 00:14:44.801 --rc genhtml_function_coverage=1 00:14:44.801 --rc genhtml_legend=1 00:14:44.801 --rc geninfo_all_blocks=1 00:14:44.801 --rc geninfo_unexecuted_blocks=1 00:14:44.801 00:14:44.801 ' 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.801 --rc genhtml_branch_coverage=1 00:14:44.801 --rc genhtml_function_coverage=1 00:14:44.801 --rc genhtml_legend=1 00:14:44.801 --rc geninfo_all_blocks=1 00:14:44.801 --rc geninfo_unexecuted_blocks=1 00:14:44.801 00:14:44.801 ' 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.801 --rc genhtml_branch_coverage=1 00:14:44.801 --rc genhtml_function_coverage=1 00:14:44.801 --rc genhtml_legend=1 00:14:44.801 --rc geninfo_all_blocks=1 00:14:44.801 --rc geninfo_unexecuted_blocks=1 00:14:44.801 00:14:44.801 ' 00:14:44.801 11:28:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.801 11:28:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.801 11:28:50 env -- common/autotest_common.sh@10 -- # set +x 00:14:44.801 ************************************ 00:14:44.801 START TEST env_memory 00:14:44.801 ************************************ 00:14:44.801 11:28:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:44.801 00:14:44.801 00:14:44.801 CUnit - A unit testing framework for C - Version 2.1-3 00:14:44.801 http://cunit.sourceforge.net/ 00:14:44.801 00:14:44.801 00:14:44.801 Suite: memory 00:14:44.801 Test: alloc and free memory map ...[2024-11-20 11:28:50.512346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:44.801 passed 00:14:44.801 Test: mem map translation ...[2024-11-20 11:28:50.556271] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:44.801 [2024-11-20 11:28:50.556365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:44.801 [2024-11-20 11:28:50.556466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:44.801 [2024-11-20 11:28:50.556490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:45.061 passed 00:14:45.061 Test: mem map registration ...[2024-11-20 11:28:50.631016] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:14:45.061 [2024-11-20 11:28:50.631113] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:14:45.061 passed 00:14:45.061 Test: mem map adjacent registrations ...passed 00:14:45.061 00:14:45.061 Run Summary: Type Total Ran Passed Failed Inactive 00:14:45.061 suites 1 1 n/a 0 0 00:14:45.061 tests 4 4 4 0 0 00:14:45.061 asserts 152 152 152 0 n/a 00:14:45.061 00:14:45.061 Elapsed time = 0.261 seconds 00:14:45.061 00:14:45.061 real 0m0.303s 00:14:45.061 user 0m0.277s 00:14:45.061 sys 0m0.019s 00:14:45.061 11:28:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.061 11:28:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 ************************************ 00:14:45.061 END TEST env_memory 00:14:45.061 ************************************ 00:14:45.061 11:28:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:45.061 11:28:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.061 11:28:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.061 11:28:50 env -- common/autotest_common.sh@10 -- # set +x 00:14:45.061 ************************************ 00:14:45.061 START TEST env_vtophys 00:14:45.061 ************************************ 00:14:45.061 11:28:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:45.320 EAL: lib.eal log level changed from notice to debug 00:14:45.320 EAL: Detected lcore 0 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 1 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 2 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 3 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 4 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 5 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 6 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 7 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 8 as core 0 on socket 0 00:14:45.320 EAL: Detected lcore 9 as core 0 on socket 0 00:14:45.320 EAL: Maximum logical cores by configuration: 128 00:14:45.320 EAL: Detected CPU lcores: 10 00:14:45.320 EAL: Detected NUMA nodes: 1 00:14:45.320 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:14:45.320 EAL: Detected shared linkage of DPDK 00:14:45.320 EAL: No 
shared files mode enabled, IPC will be disabled 00:14:45.320 EAL: Selected IOVA mode 'PA' 00:14:45.320 EAL: Probing VFIO support... 00:14:45.320 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:45.320 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:45.320 EAL: Ask a virtual area of 0x2e000 bytes 00:14:45.320 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:45.320 EAL: Setting up physically contiguous memory... 00:14:45.320 EAL: Setting maximum number of open files to 524288 00:14:45.320 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:45.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:45.320 EAL: Ask a virtual area of 0x61000 bytes 00:14:45.320 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:45.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:45.320 EAL: Ask a virtual area of 0x400000000 bytes 00:14:45.320 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:45.320 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:45.320 EAL: Ask a virtual area of 0x61000 bytes 00:14:45.320 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:45.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:45.320 EAL: Ask a virtual area of 0x400000000 bytes 00:14:45.320 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:45.320 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:45.320 EAL: Ask a virtual area of 0x61000 bytes 00:14:45.320 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:45.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:45.320 EAL: Ask a virtual area of 0x400000000 bytes 00:14:45.320 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:45.320 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:45.320 EAL: Ask a virtual area of 0x61000 bytes 00:14:45.320 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:45.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:45.320 EAL: Ask a virtual area of 0x400000000 bytes 00:14:45.320 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:45.320 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:45.320 EAL: Hugepages will be freed exactly as allocated. 00:14:45.320 EAL: No shared files mode enabled, IPC is disabled 00:14:45.320 EAL: No shared files mode enabled, IPC is disabled 00:14:45.320 EAL: TSC frequency is ~2200000 KHz 00:14:45.320 EAL: Main lcore 0 is ready (tid=7f82ce229a40;cpuset=[0]) 00:14:45.320 EAL: Trying to obtain current memory policy. 00:14:45.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:45.320 EAL: Restoring previous memory policy: 0 00:14:45.320 EAL: request: mp_malloc_sync 00:14:45.320 EAL: No shared files mode enabled, IPC is disabled 00:14:45.320 EAL: Heap on socket 0 was expanded by 2MB 00:14:45.320 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:45.320 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:45.320 EAL: Mem event callback 'spdk:(nil)' registered 00:14:45.320 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:14:45.579 00:14:45.579 00:14:45.579 CUnit - A unit testing framework for C - Version 2.1-3 00:14:45.579 http://cunit.sourceforge.net/ 00:14:45.579 00:14:45.579 00:14:45.579 Suite: components_suite 00:14:45.837 Test: vtophys_malloc_test ...passed 00:14:45.837 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:14:45.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:45.837 EAL: Restoring previous memory policy: 4 00:14:45.837 EAL: Calling mem event callback 'spdk:(nil)' 00:14:45.837 EAL: request: mp_malloc_sync 00:14:45.837 EAL: No shared files mode enabled, IPC is disabled 00:14:45.837 EAL: Heap on socket 0 was expanded by 4MB 00:14:45.837 EAL: Calling mem event callback 'spdk:(nil)' 00:14:45.837 EAL: request: mp_malloc_sync 00:14:45.837 EAL: No shared files mode enabled, IPC is disabled 00:14:45.837 EAL: Heap on socket 0 was shrunk by 4MB 00:14:45.837 EAL: Trying to obtain current memory policy. 00:14:45.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:45.837 EAL: Restoring previous memory policy: 4 00:14:45.837 EAL: Calling mem event callback 'spdk:(nil)' 00:14:45.837 EAL: request: mp_malloc_sync 00:14:45.837 EAL: No shared files mode enabled, IPC is disabled 00:14:45.837 EAL: Heap on socket 0 was expanded by 6MB 00:14:45.837 EAL: Calling mem event callback 'spdk:(nil)' 00:14:45.837 EAL: request: mp_malloc_sync 00:14:45.837 EAL: No shared files mode enabled, IPC is disabled 00:14:45.837 EAL: Heap on socket 0 was shrunk by 6MB 00:14:45.838 EAL: Trying to obtain current memory policy. 00:14:45.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:45.838 EAL: Restoring previous memory policy: 4 00:14:45.838 EAL: Calling mem event callback 'spdk:(nil)' 00:14:45.838 EAL: request: mp_malloc_sync 00:14:45.838 EAL: No shared files mode enabled, IPC is disabled 00:14:45.838 EAL: Heap on socket 0 was expanded by 10MB 00:14:46.097 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.097 EAL: request: mp_malloc_sync 00:14:46.097 EAL: No shared files mode enabled, IPC is disabled 00:14:46.097 EAL: Heap on socket 0 was shrunk by 10MB 00:14:46.097 EAL: Trying to obtain current memory policy. 00:14:46.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:46.097 EAL: Restoring previous memory policy: 4 00:14:46.097 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.097 EAL: request: mp_malloc_sync 00:14:46.097 EAL: No shared files mode enabled, IPC is disabled 00:14:46.097 EAL: Heap on socket 0 was expanded by 18MB 00:14:46.097 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.097 EAL: request: mp_malloc_sync 00:14:46.097 EAL: No shared files mode enabled, IPC is disabled 00:14:46.097 EAL: Heap on socket 0 was shrunk by 18MB 00:14:46.097 EAL: Trying to obtain current memory policy. 00:14:46.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:46.097 EAL: Restoring previous memory policy: 4 00:14:46.097 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.097 EAL: request: mp_malloc_sync 00:14:46.097 EAL: No shared files mode enabled, IPC is disabled 00:14:46.097 EAL: Heap on socket 0 was expanded by 34MB 00:14:46.097 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.097 EAL: request: mp_malloc_sync 00:14:46.097 EAL: No shared files mode enabled, IPC is disabled 00:14:46.097 EAL: Heap on socket 0 was shrunk by 34MB 00:14:46.097 EAL: Trying to obtain current memory policy. 
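Editor's note: the paired "Heap on socket 0 was expanded/shrunk by N MB" messages above (and continuing below up through 1026MB) are DPDK's malloc heap growing and shrinking as vtophys_spdk_malloc_test allocates progressively larger DMA-safe buffers. A minimal sketch of that allocation pattern, assuming a standalone program linked against SPDK's env library — the process name and print format are illustrative, not the test's actual internals:

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
	struct spdk_env_opts opts;
	size_t i;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch"; /* illustrative process name */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* The sizes in the log -- 4MB, 6MB, 10MB, 18MB, ..., 1026MB -- follow
	 * 2MB + i * 2MB with i doubling. Each allocation can force EAL to
	 * expand the hugepage heap ("Heap on socket 0 was expanded by ..."),
	 * and each free lets it shrink again. */
	for (i = 1; i <= 512; i *= 2) {
		size_t size = (2 + 2 * i) * 1024 * 1024;
		void *buf = spdk_dma_malloc(size, 0x200000, NULL);

		if (buf == NULL) {
			fprintf(stderr, "allocation of %zu bytes failed\n", size);
			break;
		}
		/* Translate the virtual address to a physical/IOVA address. */
		printf("vaddr %p -> paddr 0x%" PRIx64 "\n",
		       buf, spdk_vtophys(buf, NULL));
		spdk_dma_free(buf);
	}

	return 0;
}
```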
00:14:46.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:46.097 EAL: Restoring previous memory policy: 4 00:14:46.097 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.097 EAL: request: mp_malloc_sync 00:14:46.097 EAL: No shared files mode enabled, IPC is disabled 00:14:46.097 EAL: Heap on socket 0 was expanded by 66MB 00:14:46.356 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.356 EAL: request: mp_malloc_sync 00:14:46.356 EAL: No shared files mode enabled, IPC is disabled 00:14:46.356 EAL: Heap on socket 0 was shrunk by 66MB 00:14:46.356 EAL: Trying to obtain current memory policy. 00:14:46.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:46.356 EAL: Restoring previous memory policy: 4 00:14:46.356 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.356 EAL: request: mp_malloc_sync 00:14:46.356 EAL: No shared files mode enabled, IPC is disabled 00:14:46.356 EAL: Heap on socket 0 was expanded by 130MB 00:14:46.614 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.614 EAL: request: mp_malloc_sync 00:14:46.614 EAL: No shared files mode enabled, IPC is disabled 00:14:46.614 EAL: Heap on socket 0 was shrunk by 130MB 00:14:46.873 EAL: Trying to obtain current memory policy. 00:14:46.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:46.873 EAL: Restoring previous memory policy: 4 00:14:46.873 EAL: Calling mem event callback 'spdk:(nil)' 00:14:46.873 EAL: request: mp_malloc_sync 00:14:46.873 EAL: No shared files mode enabled, IPC is disabled 00:14:46.873 EAL: Heap on socket 0 was expanded by 258MB 00:14:47.440 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.440 EAL: request: mp_malloc_sync 00:14:47.440 EAL: No shared files mode enabled, IPC is disabled 00:14:47.440 EAL: Heap on socket 0 was shrunk by 258MB 00:14:47.708 EAL: Trying to obtain current memory policy. 00:14:47.708 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.708 EAL: Restoring previous memory policy: 4 00:14:47.708 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.708 EAL: request: mp_malloc_sync 00:14:47.708 EAL: No shared files mode enabled, IPC is disabled 00:14:47.708 EAL: Heap on socket 0 was expanded by 514MB 00:14:48.645 EAL: Calling mem event callback 'spdk:(nil)' 00:14:48.645 EAL: request: mp_malloc_sync 00:14:48.645 EAL: No shared files mode enabled, IPC is disabled 00:14:48.645 EAL: Heap on socket 0 was shrunk by 514MB 00:14:49.581 EAL: Trying to obtain current memory policy. 
00:14:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:49.581 EAL: Restoring previous memory policy: 4 00:14:49.581 EAL: Calling mem event callback 'spdk:(nil)' 00:14:49.581 EAL: request: mp_malloc_sync 00:14:49.581 EAL: No shared files mode enabled, IPC is disabled 00:14:49.581 EAL: Heap on socket 0 was expanded by 1026MB 00:14:51.486 EAL: Calling mem event callback 'spdk:(nil)' 00:14:51.486 EAL: request: mp_malloc_sync 00:14:51.486 EAL: No shared files mode enabled, IPC is disabled 00:14:51.486 EAL: Heap on socket 0 was shrunk by 1026MB 00:14:53.389 passed 00:14:53.389 00:14:53.389 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.389 suites 1 1 n/a 0 0 00:14:53.389 tests 2 2 2 0 0 00:14:53.389 asserts 5684 5684 5684 0 n/a 00:14:53.389 00:14:53.389 Elapsed time = 7.510 seconds 00:14:53.389 EAL: Calling mem event callback 'spdk:(nil)' 00:14:53.389 EAL: request: mp_malloc_sync 00:14:53.389 EAL: No shared files mode enabled, IPC is disabled 00:14:53.389 EAL: Heap on socket 0 was shrunk by 2MB 00:14:53.389 EAL: No shared files mode enabled, IPC is disabled 00:14:53.389 EAL: No shared files mode enabled, IPC is disabled 00:14:53.389 EAL: No shared files mode enabled, IPC is disabled 00:14:53.389 00:14:53.389 real 0m7.906s 00:14:53.389 user 0m6.681s 00:14:53.389 sys 0m1.054s 00:14:53.389 11:28:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.389 11:28:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:14:53.389 ************************************ 00:14:53.389 END TEST env_vtophys 00:14:53.389 ************************************ 00:14:53.389 11:28:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:53.389 11:28:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.389 11:28:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.389 11:28:58 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.389 ************************************ 00:14:53.389 START TEST env_pci 00:14:53.389 ************************************ 00:14:53.389 11:28:58 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:53.389 00:14:53.389 00:14:53.389 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.389 http://cunit.sourceforge.net/ 00:14:53.389 00:14:53.389 00:14:53.389 Suite: pci 00:14:53.389 Test: pci_hook ...[2024-11-20 11:28:58.801071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57884 has claimed it 00:14:53.389 passed 00:14:53.389 00:14:53.389 EAL: Cannot find device (10000:00:01.0) 00:14:53.389 EAL: Failed to attach device on primary process 00:14:53.389 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.389 suites 1 1 n/a 0 0 00:14:53.389 tests 1 1 1 0 0 00:14:53.389 asserts 25 25 25 0 n/a 00:14:53.389 00:14:53.389 Elapsed time = 0.008 seconds 00:14:53.389 00:14:53.389 real 0m0.085s 00:14:53.389 user 0m0.039s 00:14:53.389 sys 0m0.045s 00:14:53.389 11:28:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.389 11:28:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:14:53.389 ************************************ 00:14:53.389 END TEST env_pci 00:14:53.389 ************************************ 00:14:53.389 11:28:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:14:53.389 11:28:58 env -- env/env.sh@15 -- # uname 00:14:53.389 11:28:58 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:14:53.389 11:28:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:14:53.389 11:28:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:53.389 11:28:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:53.389 11:28:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.389 11:28:58 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.389 ************************************ 00:14:53.389 START TEST env_dpdk_post_init 00:14:53.389 ************************************ 00:14:53.389 11:28:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:53.389 EAL: Detected CPU lcores: 10 00:14:53.389 EAL: Detected NUMA nodes: 1 00:14:53.389 EAL: Detected shared linkage of DPDK 00:14:53.389 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:53.389 EAL: Selected IOVA mode 'PA' 00:14:53.389 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:53.389 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:14:53.389 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:14:53.650 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:14:53.650 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:14:53.650 Starting DPDK initialization... 00:14:53.650 Starting SPDK post initialization... 00:14:53.650 SPDK NVMe probe 00:14:53.650 Attaching to 0000:00:10.0 00:14:53.650 Attaching to 0000:00:11.0 00:14:53.650 Attaching to 0000:00:12.0 00:14:53.650 Attaching to 0000:00:13.0 00:14:53.650 Attached to 0000:00:10.0 00:14:53.650 Attached to 0000:00:11.0 00:14:53.650 Attached to 0000:00:13.0 00:14:53.650 Attached to 0000:00:12.0 00:14:53.650 Cleaning up... 
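Editor's note: the probe/attach sequence above is the core of env_dpdk_post_init — initialize the DPDK environment, then let the userspace NVMe driver claim each controller it finds on the PCI bus. A minimal sketch of that flow, assuming a standalone program against spdk/nvme.h; the callback bodies and names are illustrative, not the test's actual source:

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Called once per controller found; returning true requests attachment. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;
}

/* Called once the controller has been initialized and attached. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_sketch"; /* illustrative */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Scan the PCI bus and attach the userspace NVMe driver, producing
	 * "Probe PCI driver: spdk_nvme ..." lines like those above. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe failed\n");
		return 1;
	}

	return 0;
}
```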
00:14:53.650 00:14:53.650 real 0m0.342s 00:14:53.650 user 0m0.132s 00:14:53.650 sys 0m0.109s 00:14:53.650 11:28:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.650 11:28:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:14:53.650 ************************************ 00:14:53.650 END TEST env_dpdk_post_init 00:14:53.650 ************************************ 00:14:53.650 11:28:59 env -- env/env.sh@26 -- # uname 00:14:53.650 11:28:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:14:53.650 11:28:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:53.650 11:28:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.650 11:28:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.650 11:28:59 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.650 ************************************ 00:14:53.650 START TEST env_mem_callbacks 00:14:53.650 ************************************ 00:14:53.650 11:28:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:53.650 EAL: Detected CPU lcores: 10 00:14:53.650 EAL: Detected NUMA nodes: 1 00:14:53.650 EAL: Detected shared linkage of DPDK 00:14:53.650 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:53.650 EAL: Selected IOVA mode 'PA' 00:14:53.908 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:53.908 00:14:53.908 00:14:53.908 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.909 http://cunit.sourceforge.net/ 00:14:53.909 00:14:53.909 00:14:53.909 Suite: memory 00:14:53.909 Test: test ... 00:14:53.909 register 0x200000200000 2097152 00:14:53.909 malloc 3145728 00:14:53.909 register 0x200000400000 4194304 00:14:53.909 buf 0x2000004fffc0 len 3145728 PASSED 00:14:53.909 malloc 64 00:14:53.909 buf 0x2000004ffec0 len 64 PASSED 00:14:53.909 malloc 4194304 00:14:53.909 register 0x200000800000 6291456 00:14:53.909 buf 0x2000009fffc0 len 4194304 PASSED 00:14:53.909 free 0x2000004fffc0 3145728 00:14:53.909 free 0x2000004ffec0 64 00:14:53.909 unregister 0x200000400000 4194304 PASSED 00:14:53.909 free 0x2000009fffc0 4194304 00:14:53.909 unregister 0x200000800000 6291456 PASSED 00:14:53.909 malloc 8388608 00:14:53.909 register 0x200000400000 10485760 00:14:53.909 buf 0x2000005fffc0 len 8388608 PASSED 00:14:53.909 free 0x2000005fffc0 8388608 00:14:53.909 unregister 0x200000400000 10485760 PASSED 00:14:53.909 passed 00:14:53.909 00:14:53.909 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.909 suites 1 1 n/a 0 0 00:14:53.909 tests 1 1 1 0 0 00:14:53.909 asserts 15 15 15 0 n/a 00:14:53.909 00:14:53.909 Elapsed time = 0.076 seconds 00:14:53.909 00:14:53.909 real 0m0.286s 00:14:53.909 user 0m0.113s 00:14:53.909 sys 0m0.071s 00:14:53.909 11:28:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.909 11:28:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:14:53.909 ************************************ 00:14:53.909 END TEST env_mem_callbacks 00:14:53.909 ************************************ 00:14:53.909 00:14:53.909 real 0m9.380s 00:14:53.909 user 0m7.459s 00:14:53.909 sys 0m1.533s 00:14:53.909 11:28:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.909 11:28:59 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.909 ************************************ 00:14:53.909 END TEST env 00:14:53.909 
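Editor's note: the "register ... / unregister ..." lines in the mem_callbacks suite above are notify callbacks firing as memory enters and leaves the SPDK memory map. A minimal sketch of wiring such a callback, assuming a standalone program; note that spdk_mem_register() requires 2 MiB alignment of both address and length, which is exactly what the deliberately invalid vaddr=4d2/len=1234 calls in the earlier env_memory output were provoking:

```c
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"

/* Invoked whenever memory is registered with or unregistered from SPDK --
 * these correspond to the "register ... / unregister ..." lines above. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	printf("%s %p len %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_mem_map *map;
	void *buf = NULL;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "mem_cb_sketch"; /* illustrative */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	map = spdk_mem_map_alloc(0, &ops, NULL);

	/* Both address and length must be 2 MiB aligned, or
	 * spdk_mem_register() rejects the parameters as invalid. */
	if (posix_memalign(&buf, 0x200000, 0x200000) == 0) {
		spdk_mem_register(buf, 0x200000);   /* fires notify_cb(REGISTER) */
		spdk_mem_unregister(buf, 0x200000); /* fires notify_cb(UNREGISTER) */
		free(buf);
	}

	spdk_mem_map_free(&map);
	return 0;
}
```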
************************************ 00:14:53.909 11:28:59 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:53.909 11:28:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.909 11:28:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.909 11:28:59 -- common/autotest_common.sh@10 -- # set +x 00:14:53.909 ************************************ 00:14:53.909 START TEST rpc 00:14:53.909 ************************************ 00:14:53.909 11:28:59 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:54.168 * Looking for test storage... 00:14:54.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:54.168 11:28:59 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.168 11:28:59 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.168 11:28:59 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:54.168 11:28:59 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:54.168 11:28:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.168 11:28:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.168 11:28:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.168 11:28:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.168 11:28:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.168 11:28:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.168 11:28:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.169 11:28:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.169 11:28:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.169 11:28:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.169 11:28:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.169 11:28:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:54.169 11:28:59 rpc -- scripts/common.sh@345 -- # : 1 00:14:54.169 11:28:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.169 11:28:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.169 11:28:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:14:54.169 11:28:59 rpc -- scripts/common.sh@353 -- # local d=1 00:14:54.169 11:28:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.169 11:28:59 rpc -- scripts/common.sh@355 -- # echo 1 00:14:54.169 11:28:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.169 11:28:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:14:54.169 11:28:59 rpc -- scripts/common.sh@353 -- # local d=2 00:14:54.169 11:28:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.169 11:28:59 rpc -- scripts/common.sh@355 -- # echo 2 00:14:54.169 11:28:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.169 11:28:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.169 11:28:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.169 11:28:59 rpc -- scripts/common.sh@368 -- # return 0 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:54.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.169 --rc genhtml_branch_coverage=1 00:14:54.169 --rc genhtml_function_coverage=1 00:14:54.169 --rc genhtml_legend=1 00:14:54.169 --rc geninfo_all_blocks=1 00:14:54.169 --rc geninfo_unexecuted_blocks=1 00:14:54.169 00:14:54.169 ' 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:54.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.169 --rc genhtml_branch_coverage=1 00:14:54.169 --rc genhtml_function_coverage=1 00:14:54.169 --rc genhtml_legend=1 00:14:54.169 --rc geninfo_all_blocks=1 00:14:54.169 --rc geninfo_unexecuted_blocks=1 00:14:54.169 00:14:54.169 ' 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:54.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.169 --rc genhtml_branch_coverage=1 00:14:54.169 --rc genhtml_function_coverage=1 00:14:54.169 --rc genhtml_legend=1 00:14:54.169 --rc geninfo_all_blocks=1 00:14:54.169 --rc geninfo_unexecuted_blocks=1 00:14:54.169 00:14:54.169 ' 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:54.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.169 --rc genhtml_branch_coverage=1 00:14:54.169 --rc genhtml_function_coverage=1 00:14:54.169 --rc genhtml_legend=1 00:14:54.169 --rc geninfo_all_blocks=1 00:14:54.169 --rc geninfo_unexecuted_blocks=1 00:14:54.169 00:14:54.169 ' 00:14:54.169 11:28:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58011 00:14:54.169 11:28:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:14:54.169 11:28:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:54.169 11:28:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58011 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 58011 ']' 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.169 11:28:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.428 [2024-11-20 11:29:00.014723] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:14:54.429 [2024-11-20 11:29:00.014923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58011 ] 00:14:54.687 [2024-11-20 11:29:00.208213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.687 [2024-11-20 11:29:00.362435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:14:54.687 [2024-11-20 11:29:00.362508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58011' to capture a snapshot of events at runtime. 00:14:54.687 [2024-11-20 11:29:00.362530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.687 [2024-11-20 11:29:00.362569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.687 [2024-11-20 11:29:00.362584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58011 for offline analysis/debug. 00:14:54.687 [2024-11-20 11:29:00.364157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.624 11:29:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.624 11:29:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:14:55.624 11:29:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:55.624 11:29:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:55.624 11:29:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:14:55.624 11:29:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:14:55.624 11:29:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.624 11:29:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.624 11:29:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.624 ************************************ 00:14:55.624 START TEST rpc_integrity 00:14:55.624 ************************************ 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.624 11:29:01 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.624 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:55.624 { 00:14:55.624 "name": "Malloc0", 00:14:55.624 "aliases": [ 00:14:55.624 "89755762-ac3b-4ff8-841b-0f2b342c7f7b" 00:14:55.624 ], 00:14:55.624 "product_name": "Malloc disk", 00:14:55.624 "block_size": 512, 00:14:55.624 "num_blocks": 16384, 00:14:55.624 "uuid": "89755762-ac3b-4ff8-841b-0f2b342c7f7b", 00:14:55.624 "assigned_rate_limits": { 00:14:55.624 "rw_ios_per_sec": 0, 00:14:55.624 "rw_mbytes_per_sec": 0, 00:14:55.624 "r_mbytes_per_sec": 0, 00:14:55.624 "w_mbytes_per_sec": 0 00:14:55.624 }, 00:14:55.624 "claimed": false, 00:14:55.624 "zoned": false, 00:14:55.624 "supported_io_types": { 00:14:55.624 "read": true, 00:14:55.624 "write": true, 00:14:55.624 "unmap": true, 00:14:55.624 "flush": true, 00:14:55.624 "reset": true, 00:14:55.624 "nvme_admin": false, 00:14:55.624 "nvme_io": false, 00:14:55.624 "nvme_io_md": false, 00:14:55.624 "write_zeroes": true, 00:14:55.624 "zcopy": true, 00:14:55.624 "get_zone_info": false, 00:14:55.624 "zone_management": false, 00:14:55.624 "zone_append": false, 00:14:55.624 "compare": false, 00:14:55.624 "compare_and_write": false, 00:14:55.624 "abort": true, 00:14:55.624 "seek_hole": false, 00:14:55.624 "seek_data": false, 00:14:55.624 "copy": true, 00:14:55.624 "nvme_iov_md": false 00:14:55.624 }, 00:14:55.624 "memory_domains": [ 00:14:55.624 { 00:14:55.624 "dma_device_id": "system", 00:14:55.624 "dma_device_type": 1 00:14:55.624 }, 00:14:55.624 { 00:14:55.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.624 "dma_device_type": 2 00:14:55.624 } 00:14:55.624 ], 00:14:55.624 "driver_specific": {} 00:14:55.624 } 00:14:55.624 ]' 00:14:55.624 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:55.883 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:55.883 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:14:55.883 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.883 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.883 [2024-11-20 11:29:01.404777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:14:55.883 [2024-11-20 11:29:01.404866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.883 [2024-11-20 11:29:01.404917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:55.883 [2024-11-20 11:29:01.404938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.883 [2024-11-20 11:29:01.408122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.883 [2024-11-20 11:29:01.408177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:55.883 Passthru0 00:14:55.883 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.883 
11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:55.883 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.883 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.883 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.883 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:55.883 { 00:14:55.883 "name": "Malloc0", 00:14:55.883 "aliases": [ 00:14:55.883 "89755762-ac3b-4ff8-841b-0f2b342c7f7b" 00:14:55.883 ], 00:14:55.883 "product_name": "Malloc disk", 00:14:55.883 "block_size": 512, 00:14:55.883 "num_blocks": 16384, 00:14:55.883 "uuid": "89755762-ac3b-4ff8-841b-0f2b342c7f7b", 00:14:55.883 "assigned_rate_limits": { 00:14:55.883 "rw_ios_per_sec": 0, 00:14:55.883 "rw_mbytes_per_sec": 0, 00:14:55.883 "r_mbytes_per_sec": 0, 00:14:55.883 "w_mbytes_per_sec": 0 00:14:55.883 }, 00:14:55.883 "claimed": true, 00:14:55.883 "claim_type": "exclusive_write", 00:14:55.883 "zoned": false, 00:14:55.883 "supported_io_types": { 00:14:55.883 "read": true, 00:14:55.883 "write": true, 00:14:55.883 "unmap": true, 00:14:55.883 "flush": true, 00:14:55.883 "reset": true, 00:14:55.883 "nvme_admin": false, 00:14:55.883 "nvme_io": false, 00:14:55.883 "nvme_io_md": false, 00:14:55.883 "write_zeroes": true, 00:14:55.883 "zcopy": true, 00:14:55.883 "get_zone_info": false, 00:14:55.883 "zone_management": false, 00:14:55.883 "zone_append": false, 00:14:55.883 "compare": false, 00:14:55.883 "compare_and_write": false, 00:14:55.883 "abort": true, 00:14:55.883 "seek_hole": false, 00:14:55.884 "seek_data": false, 00:14:55.884 "copy": true, 00:14:55.884 "nvme_iov_md": false 00:14:55.884 }, 00:14:55.884 "memory_domains": [ 00:14:55.884 { 00:14:55.884 "dma_device_id": "system", 00:14:55.884 "dma_device_type": 1 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.884 "dma_device_type": 2 00:14:55.884 } 00:14:55.884 ], 00:14:55.884 "driver_specific": {} 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "name": "Passthru0", 00:14:55.884 "aliases": [ 00:14:55.884 "afb1dec1-35d6-5f09-8052-8c1d73e9753d" 00:14:55.884 ], 00:14:55.884 "product_name": "passthru", 00:14:55.884 "block_size": 512, 00:14:55.884 "num_blocks": 16384, 00:14:55.884 "uuid": "afb1dec1-35d6-5f09-8052-8c1d73e9753d", 00:14:55.884 "assigned_rate_limits": { 00:14:55.884 "rw_ios_per_sec": 0, 00:14:55.884 "rw_mbytes_per_sec": 0, 00:14:55.884 "r_mbytes_per_sec": 0, 00:14:55.884 "w_mbytes_per_sec": 0 00:14:55.884 }, 00:14:55.884 "claimed": false, 00:14:55.884 "zoned": false, 00:14:55.884 "supported_io_types": { 00:14:55.884 "read": true, 00:14:55.884 "write": true, 00:14:55.884 "unmap": true, 00:14:55.884 "flush": true, 00:14:55.884 "reset": true, 00:14:55.884 "nvme_admin": false, 00:14:55.884 "nvme_io": false, 00:14:55.884 "nvme_io_md": false, 00:14:55.884 "write_zeroes": true, 00:14:55.884 "zcopy": true, 00:14:55.884 "get_zone_info": false, 00:14:55.884 "zone_management": false, 00:14:55.884 "zone_append": false, 00:14:55.884 "compare": false, 00:14:55.884 "compare_and_write": false, 00:14:55.884 "abort": true, 00:14:55.884 "seek_hole": false, 00:14:55.884 "seek_data": false, 00:14:55.884 "copy": true, 00:14:55.884 "nvme_iov_md": false 00:14:55.884 }, 00:14:55.884 "memory_domains": [ 00:14:55.884 { 00:14:55.884 "dma_device_id": "system", 00:14:55.884 "dma_device_type": 1 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.884 "dma_device_type": 2 
00:14:55.884 } 00:14:55.884 ], 00:14:55.884 "driver_specific": { 00:14:55.884 "passthru": { 00:14:55.884 "name": "Passthru0", 00:14:55.884 "base_bdev_name": "Malloc0" 00:14:55.884 } 00:14:55.884 } 00:14:55.884 } 00:14:55.884 ]' 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:55.884 11:29:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:55.884 00:14:55.884 real 0m0.359s 00:14:55.884 user 0m0.218s 00:14:55.884 sys 0m0.043s 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.884 11:29:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:55.884 ************************************ 00:14:55.884 END TEST rpc_integrity 00:14:55.884 ************************************ 00:14:55.884 11:29:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:14:55.884 11:29:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.884 11:29:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.884 11:29:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 ************************************ 00:14:56.143 START TEST rpc_plugins 00:14:56.143 ************************************ 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:14:56.143 { 00:14:56.143 "name": "Malloc1", 00:14:56.143 "aliases": 
[ 00:14:56.143 "42f33825-66de-4c0c-a279-07483e495d62" 00:14:56.143 ], 00:14:56.143 "product_name": "Malloc disk", 00:14:56.143 "block_size": 4096, 00:14:56.143 "num_blocks": 256, 00:14:56.143 "uuid": "42f33825-66de-4c0c-a279-07483e495d62", 00:14:56.143 "assigned_rate_limits": { 00:14:56.143 "rw_ios_per_sec": 0, 00:14:56.143 "rw_mbytes_per_sec": 0, 00:14:56.143 "r_mbytes_per_sec": 0, 00:14:56.143 "w_mbytes_per_sec": 0 00:14:56.143 }, 00:14:56.143 "claimed": false, 00:14:56.143 "zoned": false, 00:14:56.143 "supported_io_types": { 00:14:56.143 "read": true, 00:14:56.143 "write": true, 00:14:56.143 "unmap": true, 00:14:56.143 "flush": true, 00:14:56.143 "reset": true, 00:14:56.143 "nvme_admin": false, 00:14:56.143 "nvme_io": false, 00:14:56.143 "nvme_io_md": false, 00:14:56.143 "write_zeroes": true, 00:14:56.143 "zcopy": true, 00:14:56.143 "get_zone_info": false, 00:14:56.143 "zone_management": false, 00:14:56.143 "zone_append": false, 00:14:56.143 "compare": false, 00:14:56.143 "compare_and_write": false, 00:14:56.143 "abort": true, 00:14:56.143 "seek_hole": false, 00:14:56.143 "seek_data": false, 00:14:56.143 "copy": true, 00:14:56.143 "nvme_iov_md": false 00:14:56.143 }, 00:14:56.143 "memory_domains": [ 00:14:56.143 { 00:14:56.143 "dma_device_id": "system", 00:14:56.143 "dma_device_type": 1 00:14:56.143 }, 00:14:56.143 { 00:14:56.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.143 "dma_device_type": 2 00:14:56.143 } 00:14:56.143 ], 00:14:56.143 "driver_specific": {} 00:14:56.143 } 00:14:56.143 ]' 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:14:56.143 11:29:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:14:56.143 00:14:56.143 real 0m0.170s 00:14:56.143 user 0m0.100s 00:14:56.143 sys 0m0.024s 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.143 ************************************ 00:14:56.143 END TEST rpc_plugins 00:14:56.143 ************************************ 00:14:56.143 11:29:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 11:29:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:14:56.143 11:29:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.143 11:29:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.143 11:29:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 ************************************ 00:14:56.143 START TEST rpc_trace_cmd_test 00:14:56.143 ************************************ 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:14:56.143 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58011", 00:14:56.143 "tpoint_group_mask": "0x8", 00:14:56.143 "iscsi_conn": { 00:14:56.143 "mask": "0x2", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "scsi": { 00:14:56.143 "mask": "0x4", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "bdev": { 00:14:56.143 "mask": "0x8", 00:14:56.143 "tpoint_mask": "0xffffffffffffffff" 00:14:56.143 }, 00:14:56.143 "nvmf_rdma": { 00:14:56.143 "mask": "0x10", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "nvmf_tcp": { 00:14:56.143 "mask": "0x20", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "ftl": { 00:14:56.143 "mask": "0x40", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "blobfs": { 00:14:56.143 "mask": "0x80", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "dsa": { 00:14:56.143 "mask": "0x200", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "thread": { 00:14:56.143 "mask": "0x400", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "nvme_pcie": { 00:14:56.143 "mask": "0x800", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "iaa": { 00:14:56.143 "mask": "0x1000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "nvme_tcp": { 00:14:56.143 "mask": "0x2000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "bdev_nvme": { 00:14:56.143 "mask": "0x4000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "sock": { 00:14:56.143 "mask": "0x8000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "blob": { 00:14:56.143 "mask": "0x10000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "bdev_raid": { 00:14:56.143 "mask": "0x20000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 }, 00:14:56.143 "scheduler": { 00:14:56.143 "mask": "0x40000", 00:14:56.143 "tpoint_mask": "0x0" 00:14:56.143 } 00:14:56.143 }' 00:14:56.143 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:14:56.403 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:14:56.403 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:14:56.403 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:14:56.403 11:29:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:14:56.403 11:29:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:14:56.403 11:29:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:14:56.403 11:29:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:14:56.403 11:29:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:14:56.662 11:29:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:14:56.662 00:14:56.662 real 0m0.311s 00:14:56.662 user 0m0.276s 00:14:56.662 sys 0m0.027s 00:14:56.662 11:29:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:56.662 11:29:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.662 ************************************ 00:14:56.662 END TEST rpc_trace_cmd_test 00:14:56.662 ************************************ 00:14:56.662 11:29:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:14:56.662 11:29:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:56.662 11:29:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:56.662 11:29:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.662 11:29:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.662 11:29:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.662 ************************************ 00:14:56.662 START TEST rpc_daemon_integrity 00:14:56.662 ************************************ 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:56.663 { 00:14:56.663 "name": "Malloc2", 00:14:56.663 "aliases": [ 00:14:56.663 "703196a9-2e1b-4c8a-9bdd-e386c495cdd7" 00:14:56.663 ], 00:14:56.663 "product_name": "Malloc disk", 00:14:56.663 "block_size": 512, 00:14:56.663 "num_blocks": 16384, 00:14:56.663 "uuid": "703196a9-2e1b-4c8a-9bdd-e386c495cdd7", 00:14:56.663 "assigned_rate_limits": { 00:14:56.663 "rw_ios_per_sec": 0, 00:14:56.663 "rw_mbytes_per_sec": 0, 00:14:56.663 "r_mbytes_per_sec": 0, 00:14:56.663 "w_mbytes_per_sec": 0 00:14:56.663 }, 00:14:56.663 "claimed": false, 00:14:56.663 "zoned": false, 00:14:56.663 "supported_io_types": { 00:14:56.663 "read": true, 00:14:56.663 "write": true, 00:14:56.663 "unmap": true, 00:14:56.663 "flush": true, 00:14:56.663 "reset": true, 00:14:56.663 "nvme_admin": false, 00:14:56.663 "nvme_io": false, 00:14:56.663 "nvme_io_md": false, 00:14:56.663 "write_zeroes": true, 00:14:56.663 "zcopy": true, 00:14:56.663 "get_zone_info": false, 00:14:56.663 "zone_management": false, 00:14:56.663 "zone_append": false, 00:14:56.663 "compare": false, 00:14:56.663 
"compare_and_write": false, 00:14:56.663 "abort": true, 00:14:56.663 "seek_hole": false, 00:14:56.663 "seek_data": false, 00:14:56.663 "copy": true, 00:14:56.663 "nvme_iov_md": false 00:14:56.663 }, 00:14:56.663 "memory_domains": [ 00:14:56.663 { 00:14:56.663 "dma_device_id": "system", 00:14:56.663 "dma_device_type": 1 00:14:56.663 }, 00:14:56.663 { 00:14:56.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.663 "dma_device_type": 2 00:14:56.663 } 00:14:56.663 ], 00:14:56.663 "driver_specific": {} 00:14:56.663 } 00:14:56.663 ]' 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.663 [2024-11-20 11:29:02.370790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:14:56.663 [2024-11-20 11:29:02.370860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.663 [2024-11-20 11:29:02.370890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:56.663 [2024-11-20 11:29:02.370909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.663 [2024-11-20 11:29:02.373849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.663 [2024-11-20 11:29:02.373901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:56.663 Passthru0 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:56.663 { 00:14:56.663 "name": "Malloc2", 00:14:56.663 "aliases": [ 00:14:56.663 "703196a9-2e1b-4c8a-9bdd-e386c495cdd7" 00:14:56.663 ], 00:14:56.663 "product_name": "Malloc disk", 00:14:56.663 "block_size": 512, 00:14:56.663 "num_blocks": 16384, 00:14:56.663 "uuid": "703196a9-2e1b-4c8a-9bdd-e386c495cdd7", 00:14:56.663 "assigned_rate_limits": { 00:14:56.663 "rw_ios_per_sec": 0, 00:14:56.663 "rw_mbytes_per_sec": 0, 00:14:56.663 "r_mbytes_per_sec": 0, 00:14:56.663 "w_mbytes_per_sec": 0 00:14:56.663 }, 00:14:56.663 "claimed": true, 00:14:56.663 "claim_type": "exclusive_write", 00:14:56.663 "zoned": false, 00:14:56.663 "supported_io_types": { 00:14:56.663 "read": true, 00:14:56.663 "write": true, 00:14:56.663 "unmap": true, 00:14:56.663 "flush": true, 00:14:56.663 "reset": true, 00:14:56.663 "nvme_admin": false, 00:14:56.663 "nvme_io": false, 00:14:56.663 "nvme_io_md": false, 00:14:56.663 "write_zeroes": true, 00:14:56.663 "zcopy": true, 00:14:56.663 "get_zone_info": false, 00:14:56.663 "zone_management": false, 00:14:56.663 "zone_append": false, 00:14:56.663 "compare": false, 00:14:56.663 "compare_and_write": false, 00:14:56.663 "abort": true, 00:14:56.663 "seek_hole": false, 00:14:56.663 "seek_data": false, 
00:14:56.663 "copy": true, 00:14:56.663 "nvme_iov_md": false 00:14:56.663 }, 00:14:56.663 "memory_domains": [ 00:14:56.663 { 00:14:56.663 "dma_device_id": "system", 00:14:56.663 "dma_device_type": 1 00:14:56.663 }, 00:14:56.663 { 00:14:56.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.663 "dma_device_type": 2 00:14:56.663 } 00:14:56.663 ], 00:14:56.663 "driver_specific": {} 00:14:56.663 }, 00:14:56.663 { 00:14:56.663 "name": "Passthru0", 00:14:56.663 "aliases": [ 00:14:56.663 "4d20acbf-9c41-550b-9148-e288dc166b2b" 00:14:56.663 ], 00:14:56.663 "product_name": "passthru", 00:14:56.663 "block_size": 512, 00:14:56.663 "num_blocks": 16384, 00:14:56.663 "uuid": "4d20acbf-9c41-550b-9148-e288dc166b2b", 00:14:56.663 "assigned_rate_limits": { 00:14:56.663 "rw_ios_per_sec": 0, 00:14:56.663 "rw_mbytes_per_sec": 0, 00:14:56.663 "r_mbytes_per_sec": 0, 00:14:56.663 "w_mbytes_per_sec": 0 00:14:56.663 }, 00:14:56.663 "claimed": false, 00:14:56.663 "zoned": false, 00:14:56.663 "supported_io_types": { 00:14:56.663 "read": true, 00:14:56.663 "write": true, 00:14:56.663 "unmap": true, 00:14:56.663 "flush": true, 00:14:56.663 "reset": true, 00:14:56.663 "nvme_admin": false, 00:14:56.663 "nvme_io": false, 00:14:56.663 "nvme_io_md": false, 00:14:56.663 "write_zeroes": true, 00:14:56.663 "zcopy": true, 00:14:56.663 "get_zone_info": false, 00:14:56.663 "zone_management": false, 00:14:56.663 "zone_append": false, 00:14:56.663 "compare": false, 00:14:56.663 "compare_and_write": false, 00:14:56.663 "abort": true, 00:14:56.663 "seek_hole": false, 00:14:56.663 "seek_data": false, 00:14:56.663 "copy": true, 00:14:56.663 "nvme_iov_md": false 00:14:56.663 }, 00:14:56.663 "memory_domains": [ 00:14:56.663 { 00:14:56.663 "dma_device_id": "system", 00:14:56.663 "dma_device_type": 1 00:14:56.663 }, 00:14:56.663 { 00:14:56.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.663 "dma_device_type": 2 00:14:56.663 } 00:14:56.663 ], 00:14:56.663 "driver_specific": { 00:14:56.663 "passthru": { 00:14:56.663 "name": "Passthru0", 00:14:56.663 "base_bdev_name": "Malloc2" 00:14:56.663 } 00:14:56.663 } 00:14:56.663 } 00:14:56.663 ]' 00:14:56.663 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:56.922 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:56.922 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:56.922 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.922 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
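Editor's note: the vbdev_passthru NOTICE lines above ("base bdev opened", "bdev claimed", "created pt_bdev for: Passthru0") and the "claimed": true / "claim_type": "exclusive_write" fields in the Malloc2 dump reflect a virtual bdev module opening and then exclusively claiming its base bdev. A minimal sketch of that claim step, assuming vbdev-module context — the module handle and function names here are illustrative, not SPDK's actual passthru source:

```c
#include "spdk/bdev.h"
#include "spdk/bdev_module.h"

/* Illustrative module handle; a real vbdev module declares this and
 * registers it with SPDK_BDEV_MODULE_REGISTER(). */
static struct spdk_bdev_module passthru_if;

/* Reacts to events (resize, removal) on the base bdev. */
static void
base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
		   void *event_ctx)
{
}

static int
claim_base(const char *name, struct spdk_bdev_desc **desc)
{
	int rc;

	/* "base bdev opened" */
	rc = spdk_bdev_open_ext(name, true, base_bdev_event_cb, NULL, desc);
	if (rc != 0) {
		return rc;
	}

	/* "bdev claimed": take exclusive write ownership so no other module
	 * can modify the base bdev underneath the passthru vbdev; this is
	 * what surfaces as "claim_type": "exclusive_write" in the dump. */
	rc = spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(*desc),
					 *desc, &passthru_if);
	if (rc != 0) {
		spdk_bdev_close(*desc);
	}
	return rc;
}
```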
00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:56.923 00:14:56.923 real 0m0.327s 00:14:56.923 user 0m0.201s 00:14:56.923 sys 0m0.033s 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.923 11:29:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:56.923 ************************************ 00:14:56.923 END TEST rpc_daemon_integrity 00:14:56.923 ************************************ 00:14:56.923 11:29:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:56.923 11:29:02 rpc -- rpc/rpc.sh@84 -- # killprocess 58011 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 58011 ']' 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@958 -- # kill -0 58011 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@959 -- # uname 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58011 00:14:56.923 killing process with pid 58011 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58011' 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@973 -- # kill 58011 00:14:56.923 11:29:02 rpc -- common/autotest_common.sh@978 -- # wait 58011 00:14:59.456 00:14:59.456 real 0m5.149s 00:14:59.456 user 0m5.924s 00:14:59.456 sys 0m0.851s 00:14:59.456 11:29:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.456 11:29:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.456 ************************************ 00:14:59.456 END TEST rpc 00:14:59.456 ************************************ 00:14:59.456 11:29:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:59.456 11:29:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.456 11:29:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.456 11:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:59.456 ************************************ 00:14:59.456 START TEST skip_rpc 00:14:59.456 ************************************ 00:14:59.456 11:29:04 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:59.456 * Looking for test storage... 
00:14:59.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:59.456 11:29:04 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:59.456 11:29:04 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:59.456 11:29:04 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.456 11:29:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.456 --rc genhtml_branch_coverage=1 00:14:59.456 --rc genhtml_function_coverage=1 00:14:59.456 --rc genhtml_legend=1 00:14:59.456 --rc geninfo_all_blocks=1 00:14:59.456 --rc geninfo_unexecuted_blocks=1 00:14:59.456 00:14:59.456 ' 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.456 --rc genhtml_branch_coverage=1 00:14:59.456 --rc genhtml_function_coverage=1 00:14:59.456 --rc genhtml_legend=1 00:14:59.456 --rc geninfo_all_blocks=1 00:14:59.456 --rc geninfo_unexecuted_blocks=1 00:14:59.456 00:14:59.456 ' 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.456 --rc genhtml_branch_coverage=1 00:14:59.456 --rc genhtml_function_coverage=1 00:14:59.456 --rc genhtml_legend=1 00:14:59.456 --rc geninfo_all_blocks=1 00:14:59.456 --rc geninfo_unexecuted_blocks=1 00:14:59.456 00:14:59.456 ' 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.456 --rc genhtml_branch_coverage=1 00:14:59.456 --rc genhtml_function_coverage=1 00:14:59.456 --rc genhtml_legend=1 00:14:59.456 --rc geninfo_all_blocks=1 00:14:59.456 --rc geninfo_unexecuted_blocks=1 00:14:59.456 00:14:59.456 ' 00:14:59.456 11:29:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:59.456 11:29:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:59.456 11:29:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.456 11:29:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.456 ************************************ 00:14:59.456 START TEST skip_rpc 00:14:59.456 ************************************ 00:14:59.456 11:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:14:59.456 11:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58240 00:14:59.456 11:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:59.456 11:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:59.456 11:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:59.715 [2024-11-20 11:29:05.221498] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:14:59.715 [2024-11-20 11:29:05.221884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58240 ] 00:14:59.715 [2024-11-20 11:29:05.412648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.999 [2024-11-20 11:29:05.574254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58240 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58240 ']' 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58240 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58240 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58240' 00:15:05.267 killing process with pid 58240 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58240 00:15:05.267 11:29:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58240 00:15:06.642 00:15:06.643 real 0m7.281s 00:15:06.643 user 0m6.703s 00:15:06.643 sys 0m0.473s 00:15:06.643 ************************************ 00:15:06.643 END TEST skip_rpc 00:15:06.643 ************************************ 00:15:06.643 11:29:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:06.643 11:29:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.643 11:29:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:06.643 11:29:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.643 11:29:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.643 11:29:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.901 ************************************ 00:15:06.901 START TEST skip_rpc_with_json 00:15:06.901 ************************************ 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58348 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58348 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58348 ']' 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.901 11:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:06.901 [2024-11-20 11:29:12.549403] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:15:06.901 [2024-11-20 11:29:12.549606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58348 ] 00:15:07.161 [2024-11-20 11:29:12.735661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.161 [2024-11-20 11:29:12.864700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:08.096 [2024-11-20 11:29:13.743943] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:08.096 request: 00:15:08.096 { 00:15:08.096 "trtype": "tcp", 00:15:08.096 "method": "nvmf_get_transports", 00:15:08.096 "req_id": 1 00:15:08.096 } 00:15:08.096 Got JSON-RPC error response 00:15:08.096 response: 00:15:08.096 { 00:15:08.096 "code": -19, 00:15:08.096 "message": "No such device" 00:15:08.096 } 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:08.096 [2024-11-20 11:29:13.756091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.096 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:08.355 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.355 11:29:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:08.355 { 00:15:08.355 "subsystems": [ 00:15:08.355 { 00:15:08.355 "subsystem": "fsdev", 00:15:08.355 "config": [ 00:15:08.355 { 00:15:08.355 "method": "fsdev_set_opts", 00:15:08.355 "params": { 00:15:08.355 "fsdev_io_pool_size": 65535, 00:15:08.355 "fsdev_io_cache_size": 256 00:15:08.355 } 00:15:08.355 } 00:15:08.355 ] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "keyring", 00:15:08.355 "config": [] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "iobuf", 00:15:08.355 "config": [ 00:15:08.355 { 00:15:08.355 "method": "iobuf_set_options", 00:15:08.355 "params": { 00:15:08.355 "small_pool_count": 8192, 00:15:08.355 "large_pool_count": 1024, 00:15:08.355 "small_bufsize": 8192, 00:15:08.355 "large_bufsize": 135168, 00:15:08.355 "enable_numa": false 00:15:08.355 } 00:15:08.355 } 00:15:08.355 ] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "sock", 00:15:08.355 "config": [ 00:15:08.355 { 
00:15:08.355 "method": "sock_set_default_impl", 00:15:08.355 "params": { 00:15:08.355 "impl_name": "posix" 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "sock_impl_set_options", 00:15:08.355 "params": { 00:15:08.355 "impl_name": "ssl", 00:15:08.355 "recv_buf_size": 4096, 00:15:08.355 "send_buf_size": 4096, 00:15:08.355 "enable_recv_pipe": true, 00:15:08.355 "enable_quickack": false, 00:15:08.355 "enable_placement_id": 0, 00:15:08.355 "enable_zerocopy_send_server": true, 00:15:08.355 "enable_zerocopy_send_client": false, 00:15:08.355 "zerocopy_threshold": 0, 00:15:08.355 "tls_version": 0, 00:15:08.355 "enable_ktls": false 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "sock_impl_set_options", 00:15:08.355 "params": { 00:15:08.355 "impl_name": "posix", 00:15:08.355 "recv_buf_size": 2097152, 00:15:08.355 "send_buf_size": 2097152, 00:15:08.355 "enable_recv_pipe": true, 00:15:08.355 "enable_quickack": false, 00:15:08.355 "enable_placement_id": 0, 00:15:08.355 "enable_zerocopy_send_server": true, 00:15:08.355 "enable_zerocopy_send_client": false, 00:15:08.355 "zerocopy_threshold": 0, 00:15:08.355 "tls_version": 0, 00:15:08.355 "enable_ktls": false 00:15:08.355 } 00:15:08.355 } 00:15:08.355 ] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "vmd", 00:15:08.355 "config": [] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "accel", 00:15:08.355 "config": [ 00:15:08.355 { 00:15:08.355 "method": "accel_set_options", 00:15:08.355 "params": { 00:15:08.355 "small_cache_size": 128, 00:15:08.355 "large_cache_size": 16, 00:15:08.355 "task_count": 2048, 00:15:08.355 "sequence_count": 2048, 00:15:08.355 "buf_count": 2048 00:15:08.355 } 00:15:08.355 } 00:15:08.355 ] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "bdev", 00:15:08.355 "config": [ 00:15:08.355 { 00:15:08.355 "method": "bdev_set_options", 00:15:08.355 "params": { 00:15:08.355 "bdev_io_pool_size": 65535, 00:15:08.355 "bdev_io_cache_size": 256, 00:15:08.355 "bdev_auto_examine": true, 00:15:08.355 "iobuf_small_cache_size": 128, 00:15:08.355 "iobuf_large_cache_size": 16 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "bdev_raid_set_options", 00:15:08.355 "params": { 00:15:08.355 "process_window_size_kb": 1024, 00:15:08.355 "process_max_bandwidth_mb_sec": 0 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "bdev_iscsi_set_options", 00:15:08.355 "params": { 00:15:08.355 "timeout_sec": 30 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "bdev_nvme_set_options", 00:15:08.355 "params": { 00:15:08.355 "action_on_timeout": "none", 00:15:08.355 "timeout_us": 0, 00:15:08.355 "timeout_admin_us": 0, 00:15:08.355 "keep_alive_timeout_ms": 10000, 00:15:08.355 "arbitration_burst": 0, 00:15:08.355 "low_priority_weight": 0, 00:15:08.355 "medium_priority_weight": 0, 00:15:08.355 "high_priority_weight": 0, 00:15:08.355 "nvme_adminq_poll_period_us": 10000, 00:15:08.355 "nvme_ioq_poll_period_us": 0, 00:15:08.355 "io_queue_requests": 0, 00:15:08.355 "delay_cmd_submit": true, 00:15:08.355 "transport_retry_count": 4, 00:15:08.355 "bdev_retry_count": 3, 00:15:08.355 "transport_ack_timeout": 0, 00:15:08.355 "ctrlr_loss_timeout_sec": 0, 00:15:08.355 "reconnect_delay_sec": 0, 00:15:08.355 "fast_io_fail_timeout_sec": 0, 00:15:08.355 "disable_auto_failback": false, 00:15:08.355 "generate_uuids": false, 00:15:08.355 "transport_tos": 0, 00:15:08.355 "nvme_error_stat": false, 00:15:08.355 "rdma_srq_size": 0, 00:15:08.355 "io_path_stat": false, 
00:15:08.355 "allow_accel_sequence": false, 00:15:08.355 "rdma_max_cq_size": 0, 00:15:08.355 "rdma_cm_event_timeout_ms": 0, 00:15:08.355 "dhchap_digests": [ 00:15:08.355 "sha256", 00:15:08.355 "sha384", 00:15:08.355 "sha512" 00:15:08.355 ], 00:15:08.355 "dhchap_dhgroups": [ 00:15:08.355 "null", 00:15:08.355 "ffdhe2048", 00:15:08.355 "ffdhe3072", 00:15:08.355 "ffdhe4096", 00:15:08.355 "ffdhe6144", 00:15:08.355 "ffdhe8192" 00:15:08.355 ] 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "bdev_nvme_set_hotplug", 00:15:08.355 "params": { 00:15:08.355 "period_us": 100000, 00:15:08.355 "enable": false 00:15:08.355 } 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "method": "bdev_wait_for_examine" 00:15:08.355 } 00:15:08.355 ] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "scsi", 00:15:08.355 "config": null 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "scheduler", 00:15:08.355 "config": [ 00:15:08.355 { 00:15:08.355 "method": "framework_set_scheduler", 00:15:08.355 "params": { 00:15:08.355 "name": "static" 00:15:08.355 } 00:15:08.355 } 00:15:08.355 ] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "vhost_scsi", 00:15:08.355 "config": [] 00:15:08.355 }, 00:15:08.355 { 00:15:08.355 "subsystem": "vhost_blk", 00:15:08.355 "config": [] 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "subsystem": "ublk", 00:15:08.356 "config": [] 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "subsystem": "nbd", 00:15:08.356 "config": [] 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "subsystem": "nvmf", 00:15:08.356 "config": [ 00:15:08.356 { 00:15:08.356 "method": "nvmf_set_config", 00:15:08.356 "params": { 00:15:08.356 "discovery_filter": "match_any", 00:15:08.356 "admin_cmd_passthru": { 00:15:08.356 "identify_ctrlr": false 00:15:08.356 }, 00:15:08.356 "dhchap_digests": [ 00:15:08.356 "sha256", 00:15:08.356 "sha384", 00:15:08.356 "sha512" 00:15:08.356 ], 00:15:08.356 "dhchap_dhgroups": [ 00:15:08.356 "null", 00:15:08.356 "ffdhe2048", 00:15:08.356 "ffdhe3072", 00:15:08.356 "ffdhe4096", 00:15:08.356 "ffdhe6144", 00:15:08.356 "ffdhe8192" 00:15:08.356 ] 00:15:08.356 } 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "method": "nvmf_set_max_subsystems", 00:15:08.356 "params": { 00:15:08.356 "max_subsystems": 1024 00:15:08.356 } 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "method": "nvmf_set_crdt", 00:15:08.356 "params": { 00:15:08.356 "crdt1": 0, 00:15:08.356 "crdt2": 0, 00:15:08.356 "crdt3": 0 00:15:08.356 } 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "method": "nvmf_create_transport", 00:15:08.356 "params": { 00:15:08.356 "trtype": "TCP", 00:15:08.356 "max_queue_depth": 128, 00:15:08.356 "max_io_qpairs_per_ctrlr": 127, 00:15:08.356 "in_capsule_data_size": 4096, 00:15:08.356 "max_io_size": 131072, 00:15:08.356 "io_unit_size": 131072, 00:15:08.356 "max_aq_depth": 128, 00:15:08.356 "num_shared_buffers": 511, 00:15:08.356 "buf_cache_size": 4294967295, 00:15:08.356 "dif_insert_or_strip": false, 00:15:08.356 "zcopy": false, 00:15:08.356 "c2h_success": true, 00:15:08.356 "sock_priority": 0, 00:15:08.356 "abort_timeout_sec": 1, 00:15:08.356 "ack_timeout": 0, 00:15:08.356 "data_wr_pool_size": 0 00:15:08.356 } 00:15:08.356 } 00:15:08.356 ] 00:15:08.356 }, 00:15:08.356 { 00:15:08.356 "subsystem": "iscsi", 00:15:08.356 "config": [ 00:15:08.356 { 00:15:08.356 "method": "iscsi_set_options", 00:15:08.356 "params": { 00:15:08.356 "node_base": "iqn.2016-06.io.spdk", 00:15:08.356 "max_sessions": 128, 00:15:08.356 "max_connections_per_session": 2, 00:15:08.356 "max_queue_depth": 64, 00:15:08.356 
"default_time2wait": 2, 00:15:08.356 "default_time2retain": 20, 00:15:08.356 "first_burst_length": 8192, 00:15:08.356 "immediate_data": true, 00:15:08.356 "allow_duplicated_isid": false, 00:15:08.356 "error_recovery_level": 0, 00:15:08.356 "nop_timeout": 60, 00:15:08.356 "nop_in_interval": 30, 00:15:08.356 "disable_chap": false, 00:15:08.356 "require_chap": false, 00:15:08.356 "mutual_chap": false, 00:15:08.356 "chap_group": 0, 00:15:08.356 "max_large_datain_per_connection": 64, 00:15:08.356 "max_r2t_per_connection": 4, 00:15:08.356 "pdu_pool_size": 36864, 00:15:08.356 "immediate_data_pool_size": 16384, 00:15:08.356 "data_out_pool_size": 2048 00:15:08.356 } 00:15:08.356 } 00:15:08.356 ] 00:15:08.356 } 00:15:08.356 ] 00:15:08.356 } 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58348 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58348 ']' 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58348 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58348 00:15:08.356 killing process with pid 58348 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58348' 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58348 00:15:08.356 11:29:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58348 00:15:10.889 11:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58400 00:15:10.889 11:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:10.889 11:29:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58400 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58400 ']' 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58400 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58400 00:15:16.152 killing process with pid 58400 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58400' 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58400 00:15:16.152 11:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58400 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:18.055 ************************************ 00:15:18.055 END TEST skip_rpc_with_json 00:15:18.055 ************************************ 00:15:18.055 00:15:18.055 real 0m11.110s 00:15:18.055 user 0m10.412s 00:15:18.055 sys 0m1.056s 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:18.055 11:29:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:18.055 11:29:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.055 11:29:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.055 11:29:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.055 ************************************ 00:15:18.055 START TEST skip_rpc_with_delay 00:15:18.055 ************************************ 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:18.055 [2024-11-20 11:29:23.703203] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.055 00:15:18.055 real 0m0.191s 00:15:18.055 user 0m0.102s 00:15:18.055 sys 0m0.086s 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.055 11:29:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:18.055 ************************************ 00:15:18.055 END TEST skip_rpc_with_delay 00:15:18.055 ************************************ 00:15:18.055 11:29:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:18.055 11:29:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:18.055 11:29:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:18.055 11:29:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.055 11:29:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.055 11:29:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.314 ************************************ 00:15:18.314 START TEST exit_on_failed_rpc_init 00:15:18.314 ************************************ 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:15:18.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58528 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58528 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58528 ']' 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.314 11:29:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:18.314 [2024-11-20 11:29:23.929881] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:18.314 [2024-11-20 11:29:23.930039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58528 ] 00:15:18.573 [2024-11-20 11:29:24.111757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.573 [2024-11-20 11:29:24.271210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:19.508 11:29:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:19.766 [2024-11-20 11:29:25.312056] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:19.766 [2024-11-20 11:29:25.312261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58557 ] 00:15:19.766 [2024-11-20 11:29:25.507124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.025 [2024-11-20 11:29:25.665182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.025 [2024-11-20 11:29:25.665326] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:20.025 [2024-11-20 11:29:25.665354] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:20.025 [2024-11-20 11:29:25.665382] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58528 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58528 ']' 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58528 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.283 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58528 00:15:20.540 killing process with pid 58528 00:15:20.540 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.540 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.540 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58528' 00:15:20.540 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58528 00:15:20.540 11:29:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58528 00:15:23.071 ************************************ 00:15:23.071 END TEST exit_on_failed_rpc_init 00:15:23.071 ************************************ 00:15:23.071 00:15:23.071 real 0m4.450s 00:15:23.071 user 0m5.035s 00:15:23.071 sys 0m0.694s 00:15:23.071 11:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.071 11:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:23.071 11:29:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:23.071 ************************************ 00:15:23.071 END TEST skip_rpc 00:15:23.071 ************************************ 00:15:23.071 00:15:23.071 real 0m23.445s 00:15:23.071 user 0m22.447s 00:15:23.071 sys 0m2.522s 00:15:23.071 11:29:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.071 11:29:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.071 11:29:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:23.071 11:29:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:23.071 11:29:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.071 11:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:23.071 
************************************ 00:15:23.071 START TEST rpc_client 00:15:23.071 ************************************ 00:15:23.071 11:29:28 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:23.071 * Looking for test storage... 00:15:23.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:23.071 11:29:28 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.071 11:29:28 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.071 11:29:28 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.071 11:29:28 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.071 11:29:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.071 11:29:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.071 11:29:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.071 11:29:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.071 11:29:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.071 11:29:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@345 -- # : 1 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.072 11:29:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.072 --rc genhtml_branch_coverage=1 00:15:23.072 --rc genhtml_function_coverage=1 00:15:23.072 --rc genhtml_legend=1 00:15:23.072 --rc geninfo_all_blocks=1 00:15:23.072 --rc geninfo_unexecuted_blocks=1 00:15:23.072 00:15:23.072 ' 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.072 --rc genhtml_branch_coverage=1 00:15:23.072 --rc genhtml_function_coverage=1 00:15:23.072 --rc genhtml_legend=1 00:15:23.072 --rc geninfo_all_blocks=1 00:15:23.072 --rc geninfo_unexecuted_blocks=1 00:15:23.072 00:15:23.072 ' 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.072 --rc genhtml_branch_coverage=1 00:15:23.072 --rc genhtml_function_coverage=1 00:15:23.072 --rc genhtml_legend=1 00:15:23.072 --rc geninfo_all_blocks=1 00:15:23.072 --rc geninfo_unexecuted_blocks=1 00:15:23.072 00:15:23.072 ' 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.072 --rc genhtml_branch_coverage=1 00:15:23.072 --rc genhtml_function_coverage=1 00:15:23.072 --rc genhtml_legend=1 00:15:23.072 --rc geninfo_all_blocks=1 00:15:23.072 --rc geninfo_unexecuted_blocks=1 00:15:23.072 00:15:23.072 ' 00:15:23.072 11:29:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:23.072 OK 00:15:23.072 11:29:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:23.072 00:15:23.072 real 0m0.258s 00:15:23.072 user 0m0.151s 00:15:23.072 sys 0m0.117s 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.072 11:29:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:23.072 ************************************ 00:15:23.072 END TEST rpc_client 00:15:23.072 ************************************ 00:15:23.072 11:29:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:23.072 11:29:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:23.072 11:29:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.072 11:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:23.072 ************************************ 00:15:23.072 START TEST json_config 00:15:23.072 ************************************ 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.072 11:29:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.072 11:29:28 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.072 11:29:28 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.072 11:29:28 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.072 11:29:28 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.072 11:29:28 json_config -- scripts/common.sh@344 -- # case "$op" in 00:15:23.072 11:29:28 json_config -- scripts/common.sh@345 -- # : 1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:23.072 11:29:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.072 11:29:28 json_config -- scripts/common.sh@365 -- # decimal 1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@353 -- # local d=1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.072 11:29:28 json_config -- scripts/common.sh@355 -- # echo 1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.072 11:29:28 json_config -- scripts/common.sh@366 -- # decimal 2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@353 -- # local d=2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.072 11:29:28 json_config -- scripts/common.sh@355 -- # echo 2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.072 11:29:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.072 11:29:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.072 11:29:28 json_config -- scripts/common.sh@368 -- # return 0 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.072 --rc genhtml_branch_coverage=1 00:15:23.072 --rc genhtml_function_coverage=1 00:15:23.072 --rc genhtml_legend=1 00:15:23.072 --rc geninfo_all_blocks=1 00:15:23.072 --rc geninfo_unexecuted_blocks=1 00:15:23.072 00:15:23.072 ' 00:15:23.072 11:29:28 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.072 --rc genhtml_branch_coverage=1 00:15:23.073 --rc genhtml_function_coverage=1 00:15:23.073 --rc genhtml_legend=1 00:15:23.073 --rc geninfo_all_blocks=1 00:15:23.073 --rc geninfo_unexecuted_blocks=1 00:15:23.073 00:15:23.073 ' 00:15:23.073 11:29:28 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.073 --rc genhtml_branch_coverage=1 00:15:23.073 --rc genhtml_function_coverage=1 00:15:23.073 --rc genhtml_legend=1 00:15:23.073 --rc geninfo_all_blocks=1 00:15:23.073 --rc geninfo_unexecuted_blocks=1 00:15:23.073 00:15:23.073 ' 00:15:23.073 11:29:28 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.073 --rc genhtml_branch_coverage=1 00:15:23.073 --rc genhtml_function_coverage=1 00:15:23.073 --rc genhtml_legend=1 00:15:23.073 --rc geninfo_all_blocks=1 00:15:23.073 --rc geninfo_unexecuted_blocks=1 00:15:23.073 00:15:23.073 ' 00:15:23.073 11:29:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:23.073 11:29:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.073 11:29:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11e88e29-ee60-469d-aa56-4628a056478e 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=11e88e29-ee60-469d-aa56-4628a056478e 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.332 11:29:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.332 11:29:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.332 11:29:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.332 11:29:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.332 11:29:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.332 11:29:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.332 11:29:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.332 11:29:28 json_config -- paths/export.sh@5 -- # export PATH 00:15:23.332 11:29:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@51 -- # : 0 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:23.332 11:29:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.332 11:29:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:23.332 WARNING: No tests are enabled so not running JSON configuration tests 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:15:23.332 11:29:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:15:23.332 00:15:23.332 real 0m0.193s 00:15:23.332 user 0m0.116s 00:15:23.332 sys 0m0.069s 00:15:23.332 ************************************ 00:15:23.332 END TEST json_config 11:29:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.332 11:29:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.332 ************************************ 00:15:23.332 11:29:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:23.332 11:29:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:23.332 11:29:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.332 11:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:23.332 ************************************ 00:15:23.332 START TEST json_config_extra_key 00:15:23.332 ************************************ 00:15:23.332 11:29:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:23.332 11:29:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.332 11:29:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.332 11:29:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.333 --rc genhtml_branch_coverage=1 00:15:23.333 --rc genhtml_function_coverage=1 00:15:23.333 --rc genhtml_legend=1 00:15:23.333 --rc geninfo_all_blocks=1 00:15:23.333 --rc geninfo_unexecuted_blocks=1 00:15:23.333 00:15:23.333 ' 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.333 --rc genhtml_branch_coverage=1 00:15:23.333 --rc genhtml_function_coverage=1 00:15:23.333 --rc genhtml_legend=1 00:15:23.333 --rc geninfo_all_blocks=1 00:15:23.333 --rc geninfo_unexecuted_blocks=1 00:15:23.333 00:15:23.333 ' 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.333 --rc genhtml_branch_coverage=1 00:15:23.333 --rc genhtml_function_coverage=1 00:15:23.333 --rc genhtml_legend=1 00:15:23.333 --rc geninfo_all_blocks=1 00:15:23.333 --rc geninfo_unexecuted_blocks=1 00:15:23.333 00:15:23.333 ' 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.333 --rc genhtml_branch_coverage=1
00:15:23.333 --rc genhtml_function_coverage=1 00:15:23.333 --rc genhtml_legend=1 00:15:23.333 --rc geninfo_all_blocks=1 00:15:23.333 --rc geninfo_unexecuted_blocks=1 00:15:23.333 00:15:23.333 ' 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:11e88e29-ee60-469d-aa56-4628a056478e 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=11e88e29-ee60-469d-aa56-4628a056478e 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.333 11:29:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.333 11:29:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.333 11:29:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.333 11:29:29 json_config_extra_key -- paths/export.sh@4
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.333 11:29:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:15:23.333 11:29:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.333 11:29:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:23.333 INFO: launching applications... 00:15:23.333 Waiting for target to run... 
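Annotation: the "[: : integer expression expected" complaint above (and repeated later in this run) comes from test/nvmf/common.sh line 33, where a variable that was never set expands to an empty string inside a numeric test. The flag's name is not visible in the trace, so "flag" below is a stand-in; a minimal reproduction and a defensive rewrite:

#!/usr/bin/env bash
flag=""                                # unset/empty test flag, as in the trace
[ "$flag" -eq 1 ] && echo enabled      # -> "[: : integer expression expected"

# Defaulting the expansion keeps the comparison numeric and silences the error:
if [ "${flag:-0}" -eq 1 ]; then
    echo enabled
fi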
00:15:23.333 11:29:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58767 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:23.333 11:29:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58767 /var/tmp/spdk_tgt.sock 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58767 ']' 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:23.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.333 11:29:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:23.591 [2024-11-20 11:29:29.207382] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:23.591 [2024-11-20 11:29:29.207883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:15:24.158 [2024-11-20 11:29:29.697975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.158 [2024-11-20 11:29:29.864286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.093 11:29:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.093 11:29:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:25.093 00:15:25.093 INFO: shutting down applications... 00:15:25.093 11:29:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
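Annotation: the shutdown sequence traced next (json_config/common.sh lines 38-45) sends SIGINT to the target and then polls it with kill -0 in half-second steps, giving up after 30 iterations. A condensed sketch of that pattern, assuming pid holds the spdk_tgt PID:

#!/usr/bin/env bash
pid=58767                              # PID taken from this run's log
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do         # roughly a 15 s budget, matching the traced loop
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done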
00:15:25.093 11:29:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58767 ]] 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58767 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:25.093 11:29:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:25.352 11:29:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:25.352 11:29:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:25.352 11:29:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:25.352 11:29:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:25.920 11:29:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:25.920 11:29:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:25.921 11:29:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:25.921 11:29:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:26.490 11:29:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:26.490 11:29:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:26.490 11:29:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:26.490 11:29:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:27.057 11:29:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:27.057 11:29:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:27.057 11:29:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:27.057 11:29:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:27.315 11:29:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:27.315 11:29:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:27.315 11:29:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:27.315 11:29:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58767 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:27.882 SPDK target shutdown done 00:15:27.882 Success 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:27.882 11:29:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:27.882 11:29:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:27.882 00:15:27.882 real 0m4.673s 00:15:27.882 user 0m4.098s 00:15:27.882 sys 0m0.691s 00:15:27.882 
************************************ 00:15:27.882 END TEST json_config_extra_key 00:15:27.882 ************************************ 00:15:27.882 11:29:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.882 11:29:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:27.882 11:29:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:27.882 11:29:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:27.882 11:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.882 11:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:27.882 ************************************ 00:15:27.882 START TEST alias_rpc 00:15:27.882 ************************************ 00:15:27.882 11:29:33 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:28.141 * Looking for test storage... 00:15:28.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.141 11:29:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.141 --rc genhtml_branch_coverage=1 00:15:28.141 --rc genhtml_function_coverage=1 00:15:28.141 --rc genhtml_legend=1 00:15:28.141 --rc geninfo_all_blocks=1 00:15:28.141 --rc geninfo_unexecuted_blocks=1 00:15:28.141 00:15:28.141 ' 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.141 --rc genhtml_branch_coverage=1 00:15:28.141 --rc genhtml_function_coverage=1 00:15:28.141 --rc genhtml_legend=1 00:15:28.141 --rc geninfo_all_blocks=1 00:15:28.141 --rc geninfo_unexecuted_blocks=1 00:15:28.141 00:15:28.141 ' 00:15:28.141 11:29:33 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.141 --rc genhtml_branch_coverage=1 00:15:28.141 --rc genhtml_function_coverage=1 00:15:28.142 --rc genhtml_legend=1 00:15:28.142 --rc geninfo_all_blocks=1 00:15:28.142 --rc geninfo_unexecuted_blocks=1 00:15:28.142 00:15:28.142 ' 00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.142 --rc genhtml_branch_coverage=1 00:15:28.142 --rc genhtml_function_coverage=1 00:15:28.142 --rc genhtml_legend=1 00:15:28.142 --rc geninfo_all_blocks=1 00:15:28.142 --rc geninfo_unexecuted_blocks=1 00:15:28.142 00:15:28.142 ' 00:15:28.142 11:29:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:28.142 11:29:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58873 00:15:28.142 11:29:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.142 11:29:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58873 00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58873 ']' 00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
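Annotation: the lcov probe repeated before each test above (via scripts/common.sh) takes the last field of "lcov --version" and compares it field-wise against 2 to decide which coverage flags to export. A simplified sketch of that comparison; the real cmp_versions also supports other operators and validates each field with decimal():

#!/usr/bin/env bash
version_lt() {                          # true when $1 < $2, compared field-wise
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < max; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                            # equal versions are not "less than"
}

lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then       # e.g. 1.15 < 2, as in this run
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi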
00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.142 11:29:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.401 [2024-11-20 11:29:33.950145] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:28.401 [2024-11-20 11:29:33.950657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58873 ] 00:15:28.401 [2024-11-20 11:29:34.131258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.659 [2024-11-20 11:29:34.266951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.594 11:29:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.594 11:29:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:29.594 11:29:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:29.855 11:29:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58873 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58873 ']' 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58873 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58873 00:15:29.855 killing process with pid 58873 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58873' 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 58873 00:15:29.855 11:29:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 58873 00:15:32.439 ************************************ 00:15:32.439 END TEST alias_rpc 00:15:32.439 ************************************ 00:15:32.439 00:15:32.439 real 0m4.063s 00:15:32.439 user 0m4.184s 00:15:32.439 sys 0m0.630s 00:15:32.439 11:29:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.439 11:29:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.439 11:29:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:15:32.439 11:29:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:32.439 11:29:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:32.439 11:29:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.440 11:29:37 -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 ************************************ 00:15:32.440 START TEST spdkcli_tcp 00:15:32.440 ************************************ 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:32.440 * Looking for test storage... 
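Annotation: the alias_rpc teardown above went through the killprocess helper in common/autotest_common.sh: it checks that the PID is still alive, logs the process name via ps (reactor_0 in this run), signals it, and reaps it. A condensed sketch of the same pattern; note that wait only succeeds for children of the calling shell, which is how the test scripts use it:

#!/usr/bin/env bash
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")        # "reactor_0" in this run
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                        # reap our own child
}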
00:15:32.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.440 11:29:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:32.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.440 --rc genhtml_branch_coverage=1 00:15:32.440 --rc genhtml_function_coverage=1 00:15:32.440 --rc genhtml_legend=1 00:15:32.440 --rc geninfo_all_blocks=1 00:15:32.440 --rc geninfo_unexecuted_blocks=1 00:15:32.440 00:15:32.440 ' 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:32.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.440 --rc genhtml_branch_coverage=1 00:15:32.440 --rc genhtml_function_coverage=1 00:15:32.440 --rc genhtml_legend=1 00:15:32.440 --rc geninfo_all_blocks=1 00:15:32.440 --rc geninfo_unexecuted_blocks=1 00:15:32.440 
00:15:32.440 ' 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:32.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.440 --rc genhtml_branch_coverage=1 00:15:32.440 --rc genhtml_function_coverage=1 00:15:32.440 --rc genhtml_legend=1 00:15:32.440 --rc geninfo_all_blocks=1 00:15:32.440 --rc geninfo_unexecuted_blocks=1 00:15:32.440 00:15:32.440 ' 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:32.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.440 --rc genhtml_branch_coverage=1 00:15:32.440 --rc genhtml_function_coverage=1 00:15:32.440 --rc genhtml_legend=1 00:15:32.440 --rc geninfo_all_blocks=1 00:15:32.440 --rc geninfo_unexecuted_blocks=1 00:15:32.440 00:15:32.440 ' 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58980 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58980 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 00:15:32.440 11:29:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.440 11:29:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 [2024-11-20 11:29:38.047162] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:32.440 [2024-11-20 11:29:38.047321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:15:32.699 [2024-11-20 11:29:38.228436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:32.699 [2024-11-20 11:29:38.385954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.699 [2024-11-20 11:29:38.385965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.633 11:29:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.633 11:29:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:15:33.633 11:29:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59003 00:15:33.633 11:29:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:15:33.633 11:29:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:15:33.891 [ 00:15:33.891 "bdev_malloc_delete", 00:15:33.891 "bdev_malloc_create", 00:15:33.891 "bdev_null_resize", 00:15:33.891 "bdev_null_delete", 00:15:33.891 "bdev_null_create", 00:15:33.892 "bdev_nvme_cuse_unregister", 00:15:33.892 "bdev_nvme_cuse_register", 00:15:33.892 "bdev_opal_new_user", 00:15:33.892 "bdev_opal_set_lock_state", 00:15:33.892 "bdev_opal_delete", 00:15:33.892 "bdev_opal_get_info", 00:15:33.892 "bdev_opal_create", 00:15:33.892 "bdev_nvme_opal_revert", 00:15:33.892 "bdev_nvme_opal_init", 00:15:33.892 "bdev_nvme_send_cmd", 00:15:33.892 "bdev_nvme_set_keys", 00:15:33.892 "bdev_nvme_get_path_iostat", 00:15:33.892 "bdev_nvme_get_mdns_discovery_info", 00:15:33.892 "bdev_nvme_stop_mdns_discovery", 00:15:33.892 "bdev_nvme_start_mdns_discovery", 00:15:33.892 "bdev_nvme_set_multipath_policy", 00:15:33.892 "bdev_nvme_set_preferred_path", 00:15:33.892 "bdev_nvme_get_io_paths", 00:15:33.892 "bdev_nvme_remove_error_injection", 00:15:33.892 "bdev_nvme_add_error_injection", 00:15:33.892 "bdev_nvme_get_discovery_info", 00:15:33.892 "bdev_nvme_stop_discovery", 00:15:33.892 "bdev_nvme_start_discovery", 00:15:33.892 "bdev_nvme_get_controller_health_info", 00:15:33.892 "bdev_nvme_disable_controller", 00:15:33.892 "bdev_nvme_enable_controller", 00:15:33.892 "bdev_nvme_reset_controller", 00:15:33.892 "bdev_nvme_get_transport_statistics", 00:15:33.892 "bdev_nvme_apply_firmware", 00:15:33.892 "bdev_nvme_detach_controller", 00:15:33.892 "bdev_nvme_get_controllers", 00:15:33.892 "bdev_nvme_attach_controller", 00:15:33.892 "bdev_nvme_set_hotplug", 00:15:33.892 "bdev_nvme_set_options", 00:15:33.892 "bdev_passthru_delete", 00:15:33.892 "bdev_passthru_create", 00:15:33.892 "bdev_lvol_set_parent_bdev", 00:15:33.892 "bdev_lvol_set_parent", 00:15:33.892 "bdev_lvol_check_shallow_copy", 00:15:33.892 "bdev_lvol_start_shallow_copy", 00:15:33.892 "bdev_lvol_grow_lvstore", 00:15:33.892 "bdev_lvol_get_lvols", 00:15:33.892 "bdev_lvol_get_lvstores", 00:15:33.892 "bdev_lvol_delete", 00:15:33.892 "bdev_lvol_set_read_only", 00:15:33.892 "bdev_lvol_resize", 00:15:33.892 "bdev_lvol_decouple_parent", 00:15:33.892 "bdev_lvol_inflate", 00:15:33.892 "bdev_lvol_rename", 00:15:33.892 "bdev_lvol_clone_bdev", 00:15:33.892 "bdev_lvol_clone", 00:15:33.892 "bdev_lvol_snapshot", 00:15:33.892 "bdev_lvol_create", 00:15:33.892 "bdev_lvol_delete_lvstore", 00:15:33.892 "bdev_lvol_rename_lvstore", 00:15:33.892 
"bdev_lvol_create_lvstore", 00:15:33.892 "bdev_raid_set_options", 00:15:33.892 "bdev_raid_remove_base_bdev", 00:15:33.892 "bdev_raid_add_base_bdev", 00:15:33.892 "bdev_raid_delete", 00:15:33.892 "bdev_raid_create", 00:15:33.892 "bdev_raid_get_bdevs", 00:15:33.892 "bdev_error_inject_error", 00:15:33.892 "bdev_error_delete", 00:15:33.892 "bdev_error_create", 00:15:33.892 "bdev_split_delete", 00:15:33.892 "bdev_split_create", 00:15:33.892 "bdev_delay_delete", 00:15:33.892 "bdev_delay_create", 00:15:33.892 "bdev_delay_update_latency", 00:15:33.892 "bdev_zone_block_delete", 00:15:33.892 "bdev_zone_block_create", 00:15:33.892 "blobfs_create", 00:15:33.892 "blobfs_detect", 00:15:33.892 "blobfs_set_cache_size", 00:15:33.892 "bdev_xnvme_delete", 00:15:33.892 "bdev_xnvme_create", 00:15:33.892 "bdev_aio_delete", 00:15:33.892 "bdev_aio_rescan", 00:15:33.892 "bdev_aio_create", 00:15:33.892 "bdev_ftl_set_property", 00:15:33.892 "bdev_ftl_get_properties", 00:15:33.892 "bdev_ftl_get_stats", 00:15:33.892 "bdev_ftl_unmap", 00:15:33.892 "bdev_ftl_unload", 00:15:33.892 "bdev_ftl_delete", 00:15:33.892 "bdev_ftl_load", 00:15:33.892 "bdev_ftl_create", 00:15:33.892 "bdev_virtio_attach_controller", 00:15:33.892 "bdev_virtio_scsi_get_devices", 00:15:33.892 "bdev_virtio_detach_controller", 00:15:33.892 "bdev_virtio_blk_set_hotplug", 00:15:33.892 "bdev_iscsi_delete", 00:15:33.892 "bdev_iscsi_create", 00:15:33.892 "bdev_iscsi_set_options", 00:15:33.892 "accel_error_inject_error", 00:15:33.892 "ioat_scan_accel_module", 00:15:33.892 "dsa_scan_accel_module", 00:15:33.892 "iaa_scan_accel_module", 00:15:33.892 "keyring_file_remove_key", 00:15:33.892 "keyring_file_add_key", 00:15:33.892 "keyring_linux_set_options", 00:15:33.892 "fsdev_aio_delete", 00:15:33.892 "fsdev_aio_create", 00:15:33.892 "iscsi_get_histogram", 00:15:33.892 "iscsi_enable_histogram", 00:15:33.892 "iscsi_set_options", 00:15:33.892 "iscsi_get_auth_groups", 00:15:33.892 "iscsi_auth_group_remove_secret", 00:15:33.892 "iscsi_auth_group_add_secret", 00:15:33.892 "iscsi_delete_auth_group", 00:15:33.892 "iscsi_create_auth_group", 00:15:33.892 "iscsi_set_discovery_auth", 00:15:33.892 "iscsi_get_options", 00:15:33.892 "iscsi_target_node_request_logout", 00:15:33.892 "iscsi_target_node_set_redirect", 00:15:33.892 "iscsi_target_node_set_auth", 00:15:33.892 "iscsi_target_node_add_lun", 00:15:33.892 "iscsi_get_stats", 00:15:33.892 "iscsi_get_connections", 00:15:33.892 "iscsi_portal_group_set_auth", 00:15:33.892 "iscsi_start_portal_group", 00:15:33.892 "iscsi_delete_portal_group", 00:15:33.892 "iscsi_create_portal_group", 00:15:33.892 "iscsi_get_portal_groups", 00:15:33.892 "iscsi_delete_target_node", 00:15:33.892 "iscsi_target_node_remove_pg_ig_maps", 00:15:33.892 "iscsi_target_node_add_pg_ig_maps", 00:15:33.892 "iscsi_create_target_node", 00:15:33.892 "iscsi_get_target_nodes", 00:15:33.892 "iscsi_delete_initiator_group", 00:15:33.892 "iscsi_initiator_group_remove_initiators", 00:15:33.892 "iscsi_initiator_group_add_initiators", 00:15:33.892 "iscsi_create_initiator_group", 00:15:33.892 "iscsi_get_initiator_groups", 00:15:33.892 "nvmf_set_crdt", 00:15:33.892 "nvmf_set_config", 00:15:33.892 "nvmf_set_max_subsystems", 00:15:33.892 "nvmf_stop_mdns_prr", 00:15:33.892 "nvmf_publish_mdns_prr", 00:15:33.892 "nvmf_subsystem_get_listeners", 00:15:33.892 "nvmf_subsystem_get_qpairs", 00:15:33.892 "nvmf_subsystem_get_controllers", 00:15:33.892 "nvmf_get_stats", 00:15:33.892 "nvmf_get_transports", 00:15:33.892 "nvmf_create_transport", 00:15:33.892 "nvmf_get_targets", 00:15:33.892 
"nvmf_delete_target", 00:15:33.892 "nvmf_create_target", 00:15:33.892 "nvmf_subsystem_allow_any_host", 00:15:33.892 "nvmf_subsystem_set_keys", 00:15:33.892 "nvmf_subsystem_remove_host", 00:15:33.892 "nvmf_subsystem_add_host", 00:15:33.892 "nvmf_ns_remove_host", 00:15:33.892 "nvmf_ns_add_host", 00:15:33.892 "nvmf_subsystem_remove_ns", 00:15:33.892 "nvmf_subsystem_set_ns_ana_group", 00:15:33.892 "nvmf_subsystem_add_ns", 00:15:33.892 "nvmf_subsystem_listener_set_ana_state", 00:15:33.892 "nvmf_discovery_get_referrals", 00:15:33.892 "nvmf_discovery_remove_referral", 00:15:33.892 "nvmf_discovery_add_referral", 00:15:33.892 "nvmf_subsystem_remove_listener", 00:15:33.892 "nvmf_subsystem_add_listener", 00:15:33.892 "nvmf_delete_subsystem", 00:15:33.892 "nvmf_create_subsystem", 00:15:33.892 "nvmf_get_subsystems", 00:15:33.892 "env_dpdk_get_mem_stats", 00:15:33.892 "nbd_get_disks", 00:15:33.892 "nbd_stop_disk", 00:15:33.892 "nbd_start_disk", 00:15:33.892 "ublk_recover_disk", 00:15:33.892 "ublk_get_disks", 00:15:33.892 "ublk_stop_disk", 00:15:33.892 "ublk_start_disk", 00:15:33.892 "ublk_destroy_target", 00:15:33.892 "ublk_create_target", 00:15:33.892 "virtio_blk_create_transport", 00:15:33.892 "virtio_blk_get_transports", 00:15:33.892 "vhost_controller_set_coalescing", 00:15:33.892 "vhost_get_controllers", 00:15:33.892 "vhost_delete_controller", 00:15:33.892 "vhost_create_blk_controller", 00:15:33.892 "vhost_scsi_controller_remove_target", 00:15:33.892 "vhost_scsi_controller_add_target", 00:15:33.892 "vhost_start_scsi_controller", 00:15:33.892 "vhost_create_scsi_controller", 00:15:33.892 "thread_set_cpumask", 00:15:33.892 "scheduler_set_options", 00:15:33.892 "framework_get_governor", 00:15:33.892 "framework_get_scheduler", 00:15:33.892 "framework_set_scheduler", 00:15:33.892 "framework_get_reactors", 00:15:33.893 "thread_get_io_channels", 00:15:33.893 "thread_get_pollers", 00:15:33.893 "thread_get_stats", 00:15:33.893 "framework_monitor_context_switch", 00:15:33.893 "spdk_kill_instance", 00:15:33.893 "log_enable_timestamps", 00:15:33.893 "log_get_flags", 00:15:33.893 "log_clear_flag", 00:15:33.893 "log_set_flag", 00:15:33.893 "log_get_level", 00:15:33.893 "log_set_level", 00:15:33.893 "log_get_print_level", 00:15:33.893 "log_set_print_level", 00:15:33.893 "framework_enable_cpumask_locks", 00:15:33.893 "framework_disable_cpumask_locks", 00:15:33.893 "framework_wait_init", 00:15:33.893 "framework_start_init", 00:15:33.893 "scsi_get_devices", 00:15:33.893 "bdev_get_histogram", 00:15:33.893 "bdev_enable_histogram", 00:15:33.893 "bdev_set_qos_limit", 00:15:33.893 "bdev_set_qd_sampling_period", 00:15:33.893 "bdev_get_bdevs", 00:15:33.893 "bdev_reset_iostat", 00:15:33.893 "bdev_get_iostat", 00:15:33.893 "bdev_examine", 00:15:33.893 "bdev_wait_for_examine", 00:15:33.893 "bdev_set_options", 00:15:33.893 "accel_get_stats", 00:15:33.893 "accel_set_options", 00:15:33.893 "accel_set_driver", 00:15:33.893 "accel_crypto_key_destroy", 00:15:33.893 "accel_crypto_keys_get", 00:15:33.893 "accel_crypto_key_create", 00:15:33.893 "accel_assign_opc", 00:15:33.893 "accel_get_module_info", 00:15:33.893 "accel_get_opc_assignments", 00:15:33.893 "vmd_rescan", 00:15:33.893 "vmd_remove_device", 00:15:33.893 "vmd_enable", 00:15:33.893 "sock_get_default_impl", 00:15:33.893 "sock_set_default_impl", 00:15:33.893 "sock_impl_set_options", 00:15:33.893 "sock_impl_get_options", 00:15:33.893 "iobuf_get_stats", 00:15:33.893 "iobuf_set_options", 00:15:33.893 "keyring_get_keys", 00:15:33.893 "framework_get_pci_devices", 00:15:33.893 
"framework_get_config", 00:15:33.893 "framework_get_subsystems", 00:15:33.893 "fsdev_set_opts", 00:15:33.893 "fsdev_get_opts", 00:15:33.893 "trace_get_info", 00:15:33.893 "trace_get_tpoint_group_mask", 00:15:33.893 "trace_disable_tpoint_group", 00:15:33.893 "trace_enable_tpoint_group", 00:15:33.893 "trace_clear_tpoint_mask", 00:15:33.893 "trace_set_tpoint_mask", 00:15:33.893 "notify_get_notifications", 00:15:33.893 "notify_get_types", 00:15:33.893 "spdk_get_version", 00:15:33.893 "rpc_get_methods" 00:15:33.893 ] 00:15:33.893 11:29:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:33.893 11:29:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:33.893 11:29:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58980 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58980 ']' 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58980 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58980 00:15:33.893 11:29:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.151 killing process with pid 58980 00:15:34.151 11:29:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.151 11:29:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58980' 00:15:34.151 11:29:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58980 00:15:34.151 11:29:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58980 00:15:36.787 00:15:36.787 real 0m4.222s 00:15:36.787 user 0m7.707s 00:15:36.787 sys 0m0.629s 00:15:36.787 11:29:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.787 11:29:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.787 ************************************ 00:15:36.787 END TEST spdkcli_tcp 00:15:36.787 ************************************ 00:15:36.787 11:29:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:36.787 11:29:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:36.787 11:29:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.787 11:29:42 -- common/autotest_common.sh@10 -- # set +x 00:15:36.787 ************************************ 00:15:36.787 START TEST dpdk_mem_utility 00:15:36.787 ************************************ 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:36.788 * Looking for test storage... 
00:15:36.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.788 11:29:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:36.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.788 --rc genhtml_branch_coverage=1 00:15:36.788 --rc genhtml_function_coverage=1 00:15:36.788 --rc genhtml_legend=1 00:15:36.788 --rc geninfo_all_blocks=1 00:15:36.788 --rc geninfo_unexecuted_blocks=1 00:15:36.788 00:15:36.788 ' 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:36.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.788 --rc 
genhtml_branch_coverage=1 00:15:36.788 --rc genhtml_function_coverage=1 00:15:36.788 --rc genhtml_legend=1 00:15:36.788 --rc geninfo_all_blocks=1 00:15:36.788 --rc geninfo_unexecuted_blocks=1 00:15:36.788 00:15:36.788 ' 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:36.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.788 --rc genhtml_branch_coverage=1 00:15:36.788 --rc genhtml_function_coverage=1 00:15:36.788 --rc genhtml_legend=1 00:15:36.788 --rc geninfo_all_blocks=1 00:15:36.788 --rc geninfo_unexecuted_blocks=1 00:15:36.788 00:15:36.788 ' 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:36.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.788 --rc genhtml_branch_coverage=1 00:15:36.788 --rc genhtml_function_coverage=1 00:15:36.788 --rc genhtml_legend=1 00:15:36.788 --rc geninfo_all_blocks=1 00:15:36.788 --rc geninfo_unexecuted_blocks=1 00:15:36.788 00:15:36.788 ' 00:15:36.788 11:29:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:36.788 11:29:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59102 00:15:36.788 11:29:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:36.788 11:29:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59102 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59102 ']' 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.788 11:29:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:36.788 [2024-11-20 11:29:42.372812] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:36.788 [2024-11-20 11:29:42.373206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ] 00:15:37.046 [2024-11-20 11:29:42.551429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.046 [2024-11-20 11:29:42.682758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.981 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.981 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:15:37.981 11:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:37.981 11:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:37.981 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.981 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:37.981 { 00:15:37.981 "filename": "/tmp/spdk_mem_dump.txt" 00:15:37.981 } 00:15:37.981 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.981 11:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:37.981 DPDK memory size 816.000000 MiB in 1 heap(s) 00:15:37.981 1 heaps totaling size 816.000000 MiB 00:15:37.981 size: 816.000000 MiB heap id: 0 00:15:37.981 end heaps---------- 00:15:37.981 9 mempools totaling size 595.772034 MiB 00:15:37.981 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:37.981 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:37.981 size: 92.545471 MiB name: bdev_io_59102 00:15:37.981 size: 50.003479 MiB name: msgpool_59102 00:15:37.981 size: 36.509338 MiB name: fsdev_io_59102 00:15:37.981 size: 21.763794 MiB name: PDU_Pool 00:15:37.981 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:37.981 size: 4.133484 MiB name: evtpool_59102 00:15:37.981 size: 0.026123 MiB name: Session_Pool 00:15:37.981 end mempools------- 00:15:37.981 6 memzones totaling size 4.142822 MiB 00:15:37.981 size: 1.000366 MiB name: RG_ring_0_59102 00:15:37.981 size: 1.000366 MiB name: RG_ring_1_59102 00:15:37.981 size: 1.000366 MiB name: RG_ring_4_59102 00:15:37.981 size: 1.000366 MiB name: RG_ring_5_59102 00:15:37.981 size: 0.125366 MiB name: RG_ring_2_59102 00:15:37.981 size: 0.015991 MiB name: RG_ring_3_59102 00:15:37.981 end memzones------- 00:15:37.981 11:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:38.242 heap id: 0 total size: 816.000000 MiB number of busy elements: 311 number of free elements: 18 00:15:38.242 list of free elements. 
size: 16.792358 MiB
00:15:38.242 element at address: 0x200006400000 with size: 1.995972 MiB
00:15:38.242 element at address: 0x20000a600000 with size: 1.995972 MiB
00:15:38.242 element at address: 0x200003e00000 with size: 1.991028 MiB
00:15:38.242 element at address: 0x200018d00040 with size: 0.999939 MiB
00:15:38.242 element at address: 0x200019100040 with size: 0.999939 MiB
00:15:38.242 element at address: 0x200019200000 with size: 0.999084 MiB
00:15:38.242 element at address: 0x200031e00000 with size: 0.994324 MiB
00:15:38.242 element at address: 0x200000400000 with size: 0.992004 MiB
00:15:38.242 element at address: 0x200018a00000 with size: 0.959656 MiB
00:15:38.242 element at address: 0x200019500040 with size: 0.936401 MiB
00:15:38.242 element at address: 0x200000200000 with size: 0.716980 MiB
00:15:38.242 element at address: 0x20001ac00000 with size: 0.562927 MiB
00:15:38.242 element at address: 0x200000c00000 with size: 0.490173 MiB
00:15:38.242 element at address: 0x200018e00000 with size: 0.487976 MiB
00:15:38.242 element at address: 0x200019600000 with size: 0.485413 MiB
00:15:38.242 element at address: 0x200012c00000 with size: 0.443237 MiB
00:15:38.242 element at address: 0x200028000000 with size: 0.390442 MiB
00:15:38.242 element at address: 0x200000800000 with size: 0.350891 MiB
00:15:38.242 list of standard malloc elements. size: 199.286743 MiB
00:15:38.242 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:15:38.242 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:15:38.242 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:15:38.242 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:15:38.242 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:15:38.242 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:15:38.242 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:15:38.242 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:15:38.242 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:15:38.242 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:15:38.242 element at address: 0x200012bff040 with size: 0.000305 MiB
00:15:38.242 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:15:38.242 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:15:38.243 elements at 0x2000004fdf40-0x2000004ff940 and 0x2000004ffbc0-0x2000004ffdc0 (0x100 apart), each with size: 0.000244 MiB
00:15:38.243 elements at 0x20000087e1c0-0x20000087f4c0 (0x100 apart), each with size: 0.000244 MiB
00:15:38.243 elements at 0x2000008ff800 and 0x2000008ffa80, each with size: 0.000244 MiB
00:15:38.243 elements at 0x200000c7d7c0-0x200000c7ebc0 (0x100 apart), each with size: 0.000244 MiB
00:15:38.243 elements at 0x200000cfef00 and 0x200000cff000, each with size: 0.000244 MiB
00:15:38.243 elements at 0x20000a5ff200-0x20000a5fff00 (0x100 apart), each with size: 0.000244 MiB
00:15:38.243 elements at 0x200012bff180-0x200012bffc80 (0x100 apart) and 0x200012bfff00, each with size: 0.000244 MiB
00:15:38.243 elements at 0x200012c71780-0x200012c72180 (0x100 apart), each with size: 0.000244 MiB
00:15:38.243 elements at 0x200012cf24c0 and 0x200018afdd00, each with size: 0.000244 MiB
00:15:38.243 elements at 0x200018e7cec0-0x200018e7d9c0 (0x100 apart), each with size: 0.000244 MiB
00:15:38.243 elements at 0x200018efdd00, 0x2000192ffc40, 0x2000195efbc0, 0x2000195efcc0 and 0x2000196bc680, each with size: 0.000244 MiB
00:15:38.243 elements at 0x20001ac901c0-0x20001ac953c0 (0x100 apart), each with size: 0.000244 MiB
00:15:38.244 elements at 0x200028063f40, 0x200028064040 and 0x20002806ad00, each with size: 0.000244 MiB
00:15:38.244 elements at 0x20002806af80-0x20002806fe80 (0x100 apart), each with size: 0.000244 MiB
00:15:38.244 list of memzone associated elements.
size: 599.920898 MiB 00:15:38.244 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:15:38.244 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:15:38.244 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:15:38.244 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:15:38.244 element at address: 0x200012df4740 with size: 92.045105 MiB 00:15:38.244 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59102_0 00:15:38.244 element at address: 0x200000dff340 with size: 48.003113 MiB 00:15:38.244 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59102_0 00:15:38.244 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:15:38.244 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59102_0 00:15:38.244 element at address: 0x2000197be900 with size: 20.255615 MiB 00:15:38.244 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:15:38.244 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:15:38.244 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:15:38.244 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:15:38.244 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59102_0 00:15:38.245 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:15:38.245 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59102 00:15:38.245 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:15:38.245 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59102 00:15:38.245 element at address: 0x200018efde00 with size: 1.008179 MiB 00:15:38.245 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:38.245 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:15:38.245 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:38.245 element at address: 0x200018afde00 with size: 1.008179 MiB 00:15:38.245 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:38.245 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:15:38.245 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:38.245 element at address: 0x200000cff100 with size: 1.000549 MiB 00:15:38.245 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59102 00:15:38.245 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:15:38.245 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59102 00:15:38.245 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:15:38.245 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59102 00:15:38.245 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:15:38.245 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59102 00:15:38.245 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:15:38.245 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59102 00:15:38.245 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:15:38.245 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59102 00:15:38.245 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:15:38.245 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:38.245 element at address: 0x200012c72280 with size: 0.500549 MiB 00:15:38.245 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:38.245 element at address: 0x20001967c440 with size: 0.250549 MiB 00:15:38.245 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:15:38.245 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:15:38.245 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59102 00:15:38.245 element at address: 0x20000085df80 with size: 0.125549 MiB 00:15:38.245 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59102 00:15:38.245 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:15:38.245 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:38.245 element at address: 0x200028064140 with size: 0.023804 MiB 00:15:38.245 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:38.245 element at address: 0x200000859d40 with size: 0.016174 MiB 00:15:38.245 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59102 00:15:38.245 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:15:38.245 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:38.245 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:15:38.245 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59102 00:15:38.245 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:15:38.245 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59102 00:15:38.245 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:15:38.245 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59102 00:15:38.245 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:15:38.245 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:38.245 11:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:38.245 11:29:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59102 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59102 ']' 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59102 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59102 00:15:38.245 killing process with pid 59102 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59102' 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59102 00:15:38.245 11:29:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59102 00:15:40.779 00:15:40.779 real 0m4.141s 00:15:40.779 user 0m4.123s 00:15:40.779 sys 0m0.628s 00:15:40.779 11:29:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.779 ************************************ 00:15:40.779 END TEST dpdk_mem_utility 00:15:40.779 ************************************ 00:15:40.779 11:29:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:40.779 11:29:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:40.779 11:29:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:40.779 11:29:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.779 11:29:46 -- common/autotest_common.sh@10 -- # set +x 
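Note on the suite just completed: dpdk_mem_utility exercises SPDK's DPDK memory reporting, and the element/memzone listing above is the resulting heap dump. A minimal sketch of regenerating such a dump by hand, assuming a running SPDK application on the default /var/tmp/spdk.sock RPC socket and assuming the env_dpdk_get_mem_stats RPC is what test_dpdk_mem_info.sh drives (the grep path below is illustrative only):

  ./scripts/rpc.py env_dpdk_get_mem_stats               # ask the target to write its DPDK memory stats to a file; the RPC reports the actual path
  grep -c 'element at address' /tmp/spdk_mem_dump.txt   # illustrative path: count malloc elements in the dump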
00:15:40.779 ************************************ 00:15:40.779 START TEST event 00:15:40.779 ************************************ 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:40.779 * Looking for test storage... 00:15:40.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1693 -- # lcov --version 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:40.779 11:29:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.779 11:29:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.779 11:29:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.779 11:29:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.779 11:29:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.779 11:29:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.779 11:29:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.779 11:29:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.779 11:29:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.779 11:29:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.779 11:29:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.779 11:29:46 event -- scripts/common.sh@344 -- # case "$op" in 00:15:40.779 11:29:46 event -- scripts/common.sh@345 -- # : 1 00:15:40.779 11:29:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.779 11:29:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.779 11:29:46 event -- scripts/common.sh@365 -- # decimal 1 00:15:40.779 11:29:46 event -- scripts/common.sh@353 -- # local d=1 00:15:40.779 11:29:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.779 11:29:46 event -- scripts/common.sh@355 -- # echo 1 00:15:40.779 11:29:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.779 11:29:46 event -- scripts/common.sh@366 -- # decimal 2 00:15:40.779 11:29:46 event -- scripts/common.sh@353 -- # local d=2 00:15:40.779 11:29:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.779 11:29:46 event -- scripts/common.sh@355 -- # echo 2 00:15:40.779 11:29:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.779 11:29:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.779 11:29:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.779 11:29:46 event -- scripts/common.sh@368 -- # return 0 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:40.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.779 --rc genhtml_branch_coverage=1 00:15:40.779 --rc genhtml_function_coverage=1 00:15:40.779 --rc genhtml_legend=1 00:15:40.779 --rc geninfo_all_blocks=1 00:15:40.779 --rc geninfo_unexecuted_blocks=1 00:15:40.779 00:15:40.779 ' 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:40.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.779 --rc genhtml_branch_coverage=1 00:15:40.779 --rc genhtml_function_coverage=1 00:15:40.779 --rc genhtml_legend=1 00:15:40.779 --rc 
geninfo_all_blocks=1 00:15:40.779 --rc geninfo_unexecuted_blocks=1 00:15:40.779 00:15:40.779 ' 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:40.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.779 --rc genhtml_branch_coverage=1 00:15:40.779 --rc genhtml_function_coverage=1 00:15:40.779 --rc genhtml_legend=1 00:15:40.779 --rc geninfo_all_blocks=1 00:15:40.779 --rc geninfo_unexecuted_blocks=1 00:15:40.779 00:15:40.779 ' 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:40.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.779 --rc genhtml_branch_coverage=1 00:15:40.779 --rc genhtml_function_coverage=1 00:15:40.779 --rc genhtml_legend=1 00:15:40.779 --rc geninfo_all_blocks=1 00:15:40.779 --rc geninfo_unexecuted_blocks=1 00:15:40.779 00:15:40.779 ' 00:15:40.779 11:29:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:40.779 11:29:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:15:40.779 11:29:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:15:40.779 11:29:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.779 11:29:46 event -- common/autotest_common.sh@10 -- # set +x 00:15:40.779 ************************************ 00:15:40.779 START TEST event_perf 00:15:40.779 ************************************ 00:15:40.779 11:29:46 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:40.779 Running I/O for 1 seconds...[2024-11-20 11:29:46.426172] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:40.779 [2024-11-20 11:29:46.426632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:15:41.038 [2024-11-20 11:29:46.614967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.038 [2024-11-20 11:29:46.778842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.038 [2024-11-20 11:29:46.778921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.038 [2024-11-20 11:29:46.779082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.038 [2024-11-20 11:29:46.779086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.432 Running I/O for 1 seconds... 00:15:42.432 lcore 0: 130579 00:15:42.432 lcore 1: 130577 00:15:42.432 lcore 2: 130576 00:15:42.432 lcore 3: 130577 00:15:42.432 done. 
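The lcore counters printed above are events processed per reactor during the 1-second run, so the aggregate throughput is simply their sum; a quick shell check:

  echo $((130579 + 130577 + 130576 + 130577))   # 522309 events total, i.e. roughly 522k events/s across 4 reactors

Of the flags visible in the trace, -m 0xF is the standard SPDK core mask (reactors on cores 0-3) and -t 1 the test's run time in seconds.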
00:15:42.432 00:15:42.432 real 0m1.671s 00:15:42.432 user 0m4.395s 00:15:42.432 sys 0m0.151s 00:15:42.432 ************************************ 00:15:42.432 END TEST event_perf 00:15:42.432 ************************************ 00:15:42.432 11:29:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.432 11:29:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.432 11:29:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:42.432 11:29:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:42.432 11:29:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.432 11:29:48 event -- common/autotest_common.sh@10 -- # set +x 00:15:42.432 ************************************ 00:15:42.432 START TEST event_reactor 00:15:42.432 ************************************ 00:15:42.432 11:29:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:42.432 [2024-11-20 11:29:48.137805] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:42.432 [2024-11-20 11:29:48.137998] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:15:42.691 [2024-11-20 11:29:48.338809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.950 [2024-11-20 11:29:48.493062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.357 test_start 00:15:44.357 oneshot 00:15:44.357 tick 100 00:15:44.357 tick 100 00:15:44.357 tick 250 00:15:44.357 tick 100 00:15:44.357 tick 100 00:15:44.357 tick 100 00:15:44.357 tick 250 00:15:44.357 tick 500 00:15:44.357 tick 100 00:15:44.357 tick 100 00:15:44.357 tick 250 00:15:44.357 tick 100 00:15:44.357 tick 100 00:15:44.357 test_end 00:15:44.357 ************************************ 00:15:44.357 END TEST event_reactor 00:15:44.357 ************************************ 00:15:44.357 00:15:44.357 real 0m1.691s 00:15:44.357 user 0m1.457s 00:15:44.357 sys 0m0.122s 00:15:44.357 11:29:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.357 11:29:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:15:44.357 11:29:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:44.357 11:29:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:44.357 11:29:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.358 11:29:49 event -- common/autotest_common.sh@10 -- # set +x 00:15:44.358 ************************************ 00:15:44.358 START TEST event_reactor_perf 00:15:44.358 ************************************ 00:15:44.358 11:29:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:44.358 [2024-11-20 11:29:49.879324] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:44.358 [2024-11-20 11:29:49.879509] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59292 ] 00:15:44.358 [2024-11-20 11:29:50.070253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.616 [2024-11-20 11:29:50.204068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.991 test_start 00:15:45.991 test_end 00:15:45.991 Performance: 269197 events per second 00:15:45.992 00:15:45.992 real 0m1.605s 00:15:45.992 user 0m1.390s 00:15:45.992 sys 0m0.102s 00:15:45.992 ************************************ 00:15:45.992 END TEST event_reactor_perf 00:15:45.992 ************************************ 00:15:45.992 11:29:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.992 11:29:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:15:45.992 11:29:51 event -- event/event.sh@49 -- # uname -s 00:15:45.992 11:29:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:45.992 11:29:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:45.992 11:29:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:45.992 11:29:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.992 11:29:51 event -- common/autotest_common.sh@10 -- # set +x 00:15:45.992 ************************************ 00:15:45.992 START TEST event_scheduler 00:15:45.992 ************************************ 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:45.992 * Looking for test storage... 
00:15:45.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:45.992 11:29:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:45.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.992 --rc genhtml_branch_coverage=1 00:15:45.992 --rc genhtml_function_coverage=1 00:15:45.992 --rc genhtml_legend=1 00:15:45.992 --rc geninfo_all_blocks=1 00:15:45.992 --rc geninfo_unexecuted_blocks=1 00:15:45.992 00:15:45.992 ' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:45.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.992 --rc genhtml_branch_coverage=1 00:15:45.992 --rc genhtml_function_coverage=1 00:15:45.992 --rc genhtml_legend=1 00:15:45.992 --rc geninfo_all_blocks=1 00:15:45.992 --rc geninfo_unexecuted_blocks=1 00:15:45.992 00:15:45.992 ' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:45.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.992 --rc genhtml_branch_coverage=1 00:15:45.992 --rc genhtml_function_coverage=1 00:15:45.992 --rc genhtml_legend=1 00:15:45.992 --rc geninfo_all_blocks=1 00:15:45.992 --rc geninfo_unexecuted_blocks=1 00:15:45.992 00:15:45.992 ' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:45.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.992 --rc genhtml_branch_coverage=1 00:15:45.992 --rc genhtml_function_coverage=1 00:15:45.992 --rc genhtml_legend=1 00:15:45.992 --rc geninfo_all_blocks=1 00:15:45.992 --rc geninfo_unexecuted_blocks=1 00:15:45.992 00:15:45.992 ' 00:15:45.992 11:29:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:45.992 11:29:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59368 00:15:45.992 11:29:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:45.992 11:29:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:45.992 11:29:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59368 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59368 ']' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.992 11:29:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:45.992 [2024-11-20 11:29:51.751888] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:45.992 [2024-11-20 11:29:51.752350] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:15:46.300 [2024-11-20 11:29:51.946119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.567 [2024-11-20 11:29:52.112179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.567 [2024-11-20 11:29:52.112335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.567 [2024-11-20 11:29:52.112395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.567 [2024-11-20 11:29:52.112397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:15:47.134 11:29:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:47.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:47.134 POWER: Cannot set governor of lcore 0 to userspace 00:15:47.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:47.134 POWER: Cannot set governor of lcore 0 to performance 00:15:47.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:47.134 POWER: Cannot set governor of lcore 0 to userspace 00:15:47.134 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:47.134 POWER: Cannot set governor of lcore 0 to userspace 00:15:47.134 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:15:47.134 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:47.134 POWER: Unable to set Power Management Environment for lcore 0 00:15:47.134 [2024-11-20 11:29:52.699377] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:15:47.134 [2024-11-20 11:29:52.699410] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:15:47.134 [2024-11-20 11:29:52.699425] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:15:47.134 [2024-11-20 11:29:52.699448] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:15:47.134 [2024-11-20 11:29:52.699461] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:15:47.134 [2024-11-20 11:29:52.699476] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.134 11:29:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.134 11:29:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 [2024-11-20 11:29:53.029009] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
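The POWER errors above are benign here: the dynamic scheduler tries to take ownership of the cpufreq scaling governors, cannot open them inside the VM, and falls back to running without a governor while keeping its tuning thresholds (load limit 20, core limit 80, core busy 95). Because the scheduler app was launched with --wait-for-rpc, the scheduler is selected over RPC before framework initialization; the same sequence can be replayed by hand against the default RPC socket (framework_get_scheduler added as an assumed verification step):

  ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before framework_start_init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py framework_get_scheduler           # confirm which scheduler is active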
00:15:47.393 11:29:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:47.393 11:29:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:47.393 11:29:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 ************************************ 00:15:47.393 START TEST scheduler_create_thread 00:15:47.393 ************************************ 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 2 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 3 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 4 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 5 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 6 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 7 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 8 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 9 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 10 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.393 11:29:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:48.768 11:29:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.768 11:29:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:48.768 11:29:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:48.768 11:29:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.768 11:29:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:49.701 ************************************ 00:15:49.701 END TEST scheduler_create_thread 00:15:49.701 ************************************ 00:15:49.701 11:29:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.701 00:15:49.701 real 0m2.144s 00:15:49.701 user 0m0.018s 00:15:49.701 sys 0m0.005s 00:15:49.701 11:29:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.701 11:29:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:49.701 11:29:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:49.701 11:29:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59368 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59368 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59368 00:15:49.701 killing process with pid 59368 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59368' 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59368 00:15:49.701 11:29:55 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59368 00:15:49.960 [2024-11-20 11:29:55.663250] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
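Everything scheduler_create_thread does goes through the scheduler RPC plugin, so the calls in the trace can be replayed manually while the scheduler app is up. A sketch, assuming rpc.py can import scheduler_plugin (it ships with the test app, so PYTHONPATH typically has to include the scheduler test directory) and using an illustrative thread id of 11 (ids are returned by the create call):

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread pinned to core 0, 100% active; prints its id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # lower it to 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 11                               # tear it down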
00:15:51.336 ************************************ 00:15:51.336 END TEST event_scheduler 00:15:51.336 ************************************ 00:15:51.336 00:15:51.336 real 0m5.414s 00:15:51.336 user 0m9.075s 00:15:51.336 sys 0m0.478s 00:15:51.336 11:29:56 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.336 11:29:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:51.336 11:29:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:15:51.336 11:29:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:51.336 11:29:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:51.336 11:29:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.336 11:29:56 event -- common/autotest_common.sh@10 -- # set +x 00:15:51.336 ************************************ 00:15:51.336 START TEST app_repeat 00:15:51.336 ************************************ 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59474 00:15:51.336 Process app_repeat pid: 59474 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59474' 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:51.336 spdk_app_start Round 0 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:51.336 11:29:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59474 /var/tmp/spdk-nbd.sock 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59474 ']' 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.336 11:29:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:51.336 [2024-11-20 11:29:57.013963] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:51.336 [2024-11-20 11:29:57.014145] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59474 ] 00:15:51.595 [2024-11-20 11:29:57.200961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:51.595 [2024-11-20 11:29:57.338946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.595 [2024-11-20 11:29:57.338959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.532 11:29:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.532 11:29:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:15:52.532 11:29:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:52.791 Malloc0 00:15:52.791 11:29:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:53.050 Malloc1 00:15:53.050 11:29:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.050 11:29:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:53.309 /dev/nbd0 00:15:53.309 11:29:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:53.568 11:29:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:53.568 11:29:59 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:53.568 1+0 records in 00:15:53.568 1+0 records out 00:15:53.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278837 s, 14.7 MB/s 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.568 11:29:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:53.568 11:29:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.568 11:29:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.568 11:29:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:53.826 /dev/nbd1 00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:53.826 1+0 records in 00:15:53.826 1+0 records out 00:15:53.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253165 s, 16.2 MB/s 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.826 11:29:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
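Each exported device is vetted by waitfornbd before use: wait for the name to show up in /proc/partitions, then prove the device answers reads by pulling a single 4 KiB block with O_DIRECT (the 1+0 records / 14.7 MB/s lines above are that dd). A trimmed sketch of the helper; the temp path and the sleep between retries approximate what autotest_common.sh actually does:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O read: the device must hand back a non-empty block
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }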
00:15:53.826 11:29:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:54.084 { 00:15:54.084 "nbd_device": "/dev/nbd0", 00:15:54.084 "bdev_name": "Malloc0" 00:15:54.084 }, 00:15:54.084 { 00:15:54.084 "nbd_device": "/dev/nbd1", 00:15:54.084 "bdev_name": "Malloc1" 00:15:54.084 } 00:15:54.084 ]' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:54.084 { 00:15:54.084 "nbd_device": "/dev/nbd0", 00:15:54.084 "bdev_name": "Malloc0" 00:15:54.084 }, 00:15:54.084 { 00:15:54.084 "nbd_device": "/dev/nbd1", 00:15:54.084 "bdev_name": "Malloc1" 00:15:54.084 } 00:15:54.084 ]' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:54.084 /dev/nbd1' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:54.084 /dev/nbd1' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:54.084 256+0 records in 00:15:54.084 256+0 records out 00:15:54.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00729607 s, 144 MB/s 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:54.084 256+0 records in 00:15:54.084 256+0 records out 00:15:54.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317181 s, 33.1 MB/s 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.084 11:29:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:54.342 256+0 records in 00:15:54.342 256+0 records out 00:15:54.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339062 s, 30.9 MB/s 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:54.342 11:29:59 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.342 11:29:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.600 11:30:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:54.859 11:30:00 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.859 11:30:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:55.117 11:30:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:55.117 11:30:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:55.743 11:30:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:56.685 [2024-11-20 11:30:02.302993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.685 [2024-11-20 11:30:02.429338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.685 [2024-11-20 11:30:02.429343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.944 [2024-11-20 11:30:02.618817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:56.944 [2024-11-20 11:30:02.618914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:58.843 spdk_app_start Round 1 00:15:58.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:58.843 11:30:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:58.843 11:30:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:58.843 11:30:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59474 /var/tmp/spdk-nbd.sock 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59474 ']' 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
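Round 0's data check, visible in the dd/cmp records above, writes one random megabyte through each device and byte-compares it back: 256 blocks of 4096 bytes are drawn from /dev/urandom once, copied onto each /dev/nbdX with O_DIRECT, then cmp verifies the first 1 MiB of every device against the pattern file. The verify loop, with the long repo path shortened to a temp file:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct # write it out
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                            # read back, compare
    done
    rm "$tmp_file"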
00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.843 11:30:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:15:58.843 11:30:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:59.409 Malloc0 00:15:59.409 11:30:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:59.667 Malloc1 00:15:59.667 11:30:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:59.667 11:30:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:59.926 /dev/nbd0 00:15:59.926 11:30:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:59.926 11:30:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:59.926 1+0 records in 00:15:59.926 1+0 records out 
00:15:59.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792064 s, 5.2 MB/s 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.926 11:30:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:59.926 11:30:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.926 11:30:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:59.926 11:30:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:00.185 /dev/nbd1 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:00.185 1+0 records in 00:16:00.185 1+0 records out 00:16:00.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264295 s, 15.5 MB/s 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.185 11:30:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.185 11:30:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:00.443 11:30:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:00.443 { 00:16:00.443 "nbd_device": "/dev/nbd0", 00:16:00.443 "bdev_name": "Malloc0" 00:16:00.443 }, 00:16:00.443 { 00:16:00.443 "nbd_device": "/dev/nbd1", 00:16:00.443 "bdev_name": "Malloc1" 00:16:00.443 } 
00:16:00.443 ]' 00:16:00.443 11:30:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:00.443 { 00:16:00.443 "nbd_device": "/dev/nbd0", 00:16:00.443 "bdev_name": "Malloc0" 00:16:00.443 }, 00:16:00.443 { 00:16:00.443 "nbd_device": "/dev/nbd1", 00:16:00.443 "bdev_name": "Malloc1" 00:16:00.443 } 00:16:00.443 ]' 00:16:00.443 11:30:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:00.702 /dev/nbd1' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:00.702 /dev/nbd1' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:00.702 256+0 records in 00:16:00.702 256+0 records out 00:16:00.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00940522 s, 111 MB/s 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:00.702 256+0 records in 00:16:00.702 256+0 records out 00:16:00.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232496 s, 45.1 MB/s 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:00.702 256+0 records in 00:16:00.702 256+0 records out 00:16:00.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0372786 s, 28.1 MB/s 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:00.702 11:30:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.702 11:30:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.961 11:30:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.219 11:30:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:01.478 11:30:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:01.478 11:30:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:01.478 11:30:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:01.736 11:30:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:01.736 11:30:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:01.994 11:30:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:03.371 [2024-11-20 11:30:08.796121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:03.371 [2024-11-20 11:30:08.929788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.371 [2024-11-20 11:30:08.929790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.371 [2024-11-20 11:30:09.127391] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:03.371 [2024-11-20 11:30:09.127562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:05.273 spdk_app_start Round 2 00:16:05.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:05.273 11:30:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:05.273 11:30:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:05.273 11:30:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59474 /var/tmp/spdk-nbd.sock 00:16:05.273 11:30:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59474 ']' 00:16:05.273 11:30:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:05.273 11:30:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.273 11:30:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
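Rounds end cooperatively: instead of signalling from outside, the harness sends spdk_kill_instance SIGTERM over the RPC socket and sleeps three seconds while the app restarts its reactors for the next iteration, which is why "Reactor started on core" reappears before every round. The per-round skeleton, reusing the harness's waitforlisten helper:

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        waitforlisten "$repeat_pid" "$rpc_sock"
        # ... bdev_malloc_create, nbd start, dd/cmp data verification ...
        scripts/rpc.py -s "$rpc_sock" spdk_kill_instance SIGTERM
        sleep 3   # let the app cycle its reactors before the next round
    done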
00:16:05.273 11:30:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.273 11:30:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:05.532 11:30:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.532 11:30:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:16:05.532 11:30:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:05.789 Malloc0 00:16:05.789 11:30:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:06.048 Malloc1 00:16:06.048 11:30:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.048 11:30:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:06.307 /dev/nbd0 00:16:06.565 11:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.565 11:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:06.565 1+0 records in 00:16:06.565 1+0 records out 
00:16:06.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266414 s, 15.4 MB/s 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.565 11:30:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:06.565 11:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.565 11:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.565 11:30:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:06.824 /dev/nbd1 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:06.824 1+0 records in 00:16:06.824 1+0 records out 00:16:06.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352194 s, 11.6 MB/s 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.824 11:30:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.824 11:30:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:07.082 { 00:16:07.082 "nbd_device": "/dev/nbd0", 00:16:07.082 "bdev_name": "Malloc0" 00:16:07.082 }, 00:16:07.082 { 00:16:07.082 "nbd_device": "/dev/nbd1", 00:16:07.082 "bdev_name": "Malloc1" 00:16:07.082 } 
00:16:07.082 ]' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:07.082 { 00:16:07.082 "nbd_device": "/dev/nbd0", 00:16:07.082 "bdev_name": "Malloc0" 00:16:07.082 }, 00:16:07.082 { 00:16:07.082 "nbd_device": "/dev/nbd1", 00:16:07.082 "bdev_name": "Malloc1" 00:16:07.082 } 00:16:07.082 ]' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:07.082 /dev/nbd1' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:07.082 /dev/nbd1' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:07.082 256+0 records in 00:16:07.082 256+0 records out 00:16:07.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00877761 s, 119 MB/s 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:07.082 256+0 records in 00:16:07.082 256+0 records out 00:16:07.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311959 s, 33.6 MB/s 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:07.082 11:30:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:07.341 256+0 records in 00:16:07.341 256+0 records out 00:16:07.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283982 s, 36.9 MB/s 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:07.341 11:30:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:07.341 11:30:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.342 11:30:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.601 11:30:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.860 11:30:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:08.428 11:30:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:08.428 11:30:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:08.687 11:30:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:10.103 [2024-11-20 11:30:15.483170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.103 [2024-11-20 11:30:15.608509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.103 [2024-11-20 11:30:15.608524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.103 [2024-11-20 11:30:15.797732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:10.103 [2024-11-20 11:30:15.797824] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:12.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:12.004 11:30:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59474 /var/tmp/spdk-nbd.sock 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59474 ']' 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
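After both devices are stopped, the harness asserts nothing is still exported: nbd_get_disks now returns an empty JSON array, jq extracts no device paths, and grep -c counts zero (the lone true in the trace exists because grep exits non-zero on a zero count). The counting helper reduces to:

    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name count
        disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits 1 on no match; keep set -e happy
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }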
00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:16:12.004 11:30:17 event.app_repeat -- event/event.sh@39 -- # killprocess 59474 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59474 ']' 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59474 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.004 11:30:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59474 00:16:12.262 killing process with pid 59474 00:16:12.262 11:30:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.262 11:30:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.262 11:30:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59474' 00:16:12.262 11:30:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59474 00:16:12.262 11:30:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59474 00:16:13.196 spdk_app_start is called in Round 0. 00:16:13.196 Shutdown signal received, stop current app iteration 00:16:13.196 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:16:13.196 spdk_app_start is called in Round 1. 00:16:13.196 Shutdown signal received, stop current app iteration 00:16:13.196 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:16:13.196 spdk_app_start is called in Round 2. 00:16:13.196 Shutdown signal received, stop current app iteration 00:16:13.196 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:16:13.196 spdk_app_start is called in Round 3. 00:16:13.196 Shutdown signal received, stop current app iteration 00:16:13.196 11:30:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:13.196 11:30:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:16:13.196 00:16:13.196 real 0m21.776s 00:16:13.196 user 0m48.350s 00:16:13.196 sys 0m3.014s 00:16:13.196 ************************************ 00:16:13.196 END TEST app_repeat 00:16:13.196 ************************************ 00:16:13.196 11:30:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.196 11:30:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:13.196 11:30:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:13.196 11:30:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:13.196 11:30:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:13.196 11:30:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.196 11:30:18 event -- common/autotest_common.sh@10 -- # set +x 00:16:13.196 ************************************ 00:16:13.196 START TEST cpu_locks 00:16:13.196 ************************************ 00:16:13.196 11:30:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:13.196 * Looking for test storage... 
00:16:13.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:13.196 11:30:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.196 11:30:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.196 11:30:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.196 11:30:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.196 11:30:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.454 11:30:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.454 --rc genhtml_branch_coverage=1 00:16:13.454 --rc genhtml_function_coverage=1 00:16:13.454 --rc genhtml_legend=1 00:16:13.454 --rc geninfo_all_blocks=1 00:16:13.454 --rc geninfo_unexecuted_blocks=1 00:16:13.454 00:16:13.454 ' 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.454 --rc genhtml_branch_coverage=1 00:16:13.454 --rc genhtml_function_coverage=1 
00:16:13.454 --rc genhtml_legend=1 00:16:13.454 --rc geninfo_all_blocks=1 00:16:13.454 --rc geninfo_unexecuted_blocks=1 00:16:13.454 00:16:13.454 ' 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.454 --rc genhtml_branch_coverage=1 00:16:13.454 --rc genhtml_function_coverage=1 00:16:13.454 --rc genhtml_legend=1 00:16:13.454 --rc geninfo_all_blocks=1 00:16:13.454 --rc geninfo_unexecuted_blocks=1 00:16:13.454 00:16:13.454 ' 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.454 --rc genhtml_branch_coverage=1 00:16:13.454 --rc genhtml_function_coverage=1 00:16:13.454 --rc genhtml_legend=1 00:16:13.454 --rc geninfo_all_blocks=1 00:16:13.454 --rc geninfo_unexecuted_blocks=1 00:16:13.454 00:16:13.454 ' 00:16:13.454 11:30:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:13.454 11:30:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:13.454 11:30:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:13.454 11:30:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.454 11:30:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 ************************************ 00:16:13.454 START TEST default_locks 00:16:13.454 ************************************ 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59945 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59945 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59945 ']' 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.454 11:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 [2024-11-20 11:30:19.099208] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
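The lcov gate traced above (lt 1.15 2 via cmp_versions) splits both version strings on ., -, and :, then compares them component by component, treating missing fields as 0; only when the installed lcov predates 2.x are the --rc branch/function coverage flags enabled. A simplified standalone sketch of that comparison, not the exact scripts/common.sh helper:

    # Simplified sketch of the version test traced above (not SPDK's exact code):
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller -> less-than
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater -> not less-than
        done
        return 1                                        # equal -> not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x; enable the --rc coverage flags'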
00:16:13.454 [2024-11-20 11:30:19.099868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59945 ] 00:16:13.712 [2024-11-20 11:30:19.283833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.712 [2024-11-20 11:30:19.436566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.646 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.646 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:16:14.646 11:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59945 00:16:14.646 11:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59945 00:16:14.646 11:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59945 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59945 ']' 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59945 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59945 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.212 killing process with pid 59945 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59945' 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59945 00:16:15.212 11:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59945 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59945 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59945 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59945 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59945 ']' 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.735 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.735 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:17.736 ERROR: process (pid: 59945) is no longer running 00:16:17.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59945) - No such process 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:17.736 00:16:17.736 real 0m3.974s 00:16:17.736 user 0m4.014s 00:16:17.736 sys 0m0.713s 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.736 11:30:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:17.736 ************************************ 00:16:17.736 END TEST default_locks 00:16:17.736 ************************************ 00:16:17.736 11:30:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:17.736 11:30:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.736 11:30:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.736 11:30:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:17.736 ************************************ 00:16:17.736 START TEST default_locks_via_rpc 00:16:17.736 ************************************ 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60020 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60020 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60020 ']' 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
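The default_locks case that finished just above reduces to four steps: start one target pinned to core 0, prove the core lock is held, kill the target, then assert that a further waitforlisten on the dead pid fails (hence the NOT wrapper). A condensed sketch with illustrative values, not the literal test body:

    # Condensed restatement of default_locks (pid handling illustrative):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # claims core 0
    pid=$!
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # the flock on /var/tmp/spdk_cpu_lock_000 is visible
    kill "$pid"
    # after the kill, waitforlisten "$pid" must fail -> asserted via NOT above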
00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.736 11:30:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.736 11:30:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.736 [2024-11-20 11:30:23.107564] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:17.736 [2024-11-20 11:30:23.107702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60020 ] 00:16:17.736 [2024-11-20 11:30:23.283925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.736 [2024-11-20 11:30:23.413998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60020 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60020 00:16:18.670 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60020 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60020 ']' 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60020 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60020 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.238 killing process with pid 60020 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60020' 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60020 00:16:19.238 11:30:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60020 00:16:21.772 00:16:21.772 real 0m3.950s 00:16:21.772 user 0m4.002s 00:16:21.772 sys 0m0.709s 00:16:21.772 11:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.772 11:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 ************************************ 00:16:21.772 END TEST default_locks_via_rpc 00:16:21.772 ************************************ 00:16:21.772 11:30:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:16:21.772 11:30:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:21.772 11:30:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.772 11:30:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:21.772 ************************************ 00:16:21.772 START TEST non_locking_app_on_locked_coremask 00:16:21.772 ************************************ 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60096 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60096 /var/tmp/spdk.sock 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60096 ']' 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.772 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.773 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.773 11:30:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:21.773 [2024-11-20 11:30:27.121597] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
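default_locks_via_rpc, which ended above, exercises the same lock but toggles it at runtime instead of at startup: framework_disable_cpumask_locks lifts the target's core-lock claims and framework_enable_cpumask_locks re-acquires them, after which the lslocks check passes again. A sketch, assuming the in-repo scripts/rpc.py client:

    # Runtime toggle of the core locks (sketch; client path assumed):
    scripts/rpc.py framework_disable_cpumask_locks   # target releases its /var/tmp/spdk_cpu_lock_* claims
    scripts/rpc.py framework_enable_cpumask_locks    # target re-claims them; lslocks shows the lock again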
00:16:21.773 [2024-11-20 11:30:27.121790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:16:21.773 [2024-11-20 11:30:27.311332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.773 [2024-11-20 11:30:27.486175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60118 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60118 /var/tmp/spdk2.sock 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60118 ']' 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.706 11:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:22.964 [2024-11-20 11:30:28.527719] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:22.964 [2024-11-20 11:30:28.527875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60118 ] 00:16:22.964 [2024-11-20 11:30:28.726589] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
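The "CPU core locks deactivated." notice above comes from the second target in non_locking_app_on_locked_coremask: both instances are pinned to the same core, but the second opts out of locking and listens on its own RPC socket, so the pair can coexist. In outline:

    # Two targets sharing core 0; only the first takes the lock (sketch):
    spdk_tgt -m 0x1 &                                                  # holds /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # claims nothing; prints the notice above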
00:16:22.964 [2024-11-20 11:30:28.726675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.530 [2024-11-20 11:30:28.990142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.064 11:30:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.064 11:30:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:26.064 11:30:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60096 00:16:26.064 11:30:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60096 00:16:26.064 11:30:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60096 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60096 ']' 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60096 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60096 00:16:26.631 killing process with pid 60096 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60096' 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60096 00:16:26.631 11:30:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60096 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60118 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60118 ']' 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60118 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60118 00:16:31.902 killing process with pid 60118 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60118' 00:16:31.902 11:30:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60118 00:16:31.902 11:30:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60118 00:16:33.282 00:16:33.282 real 0m12.049s 00:16:33.282 user 0m12.649s 00:16:33.282 sys 0m1.617s 00:16:33.282 11:30:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.282 ************************************ 00:16:33.282 END TEST non_locking_app_on_locked_coremask 00:16:33.282 11:30:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:33.282 ************************************ 00:16:33.541 11:30:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:33.541 11:30:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:33.541 11:30:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.541 11:30:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:33.541 ************************************ 00:16:33.541 START TEST locking_app_on_unlocked_coremask 00:16:33.541 ************************************ 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:16:33.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60272 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60272 /var/tmp/spdk.sock 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60272 ']' 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.541 11:30:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:33.541 [2024-11-20 11:30:39.228334] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:33.541 [2024-11-20 11:30:39.228829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60272 ] 00:16:33.799 [2024-11-20 11:30:39.425958] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
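locking_app_on_unlocked_coremask, starting above with pid 60272, is the mirror image: the first target disables locking, so core 0 is left unclaimed and the second, default-locking target can take the lock even though it shares the core. In outline:

    # Unlocked instance first, locking instance second (sketch):
    spdk_tgt -m 0x1 --disable-cpumask-locks &    # claims nothing
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # succeeds and holds spdk_cpu_lock_000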
00:16:33.799 [2024-11-20 11:30:39.426437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.059 [2024-11-20 11:30:39.588788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60294 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60294 /var/tmp/spdk2.sock 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60294 ']' 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:34.995 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.996 11:30:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 [2024-11-20 11:30:40.683061] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:16:34.996 [2024-11-20 11:30:40.684083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60294 ] 00:16:35.254 [2024-11-20 11:30:40.890667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.512 [2024-11-20 11:30:41.160269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.058 11:30:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.058 11:30:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:38.058 11:30:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60294 00:16:38.058 11:30:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:38.058 11:30:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60294 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60272 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60272 ']' 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60272 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60272 00:16:38.995 killing process with pid 60272 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60272' 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60272 00:16:38.995 11:30:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60272 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60294 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60294 ']' 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60294 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60294 00:16:43.202 killing process with pid 60294 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.202 11:30:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60294' 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60294 00:16:43.202 11:30:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60294 00:16:45.736 ************************************ 00:16:45.736 END TEST locking_app_on_unlocked_coremask 00:16:45.736 ************************************ 00:16:45.736 00:16:45.736 real 0m12.010s 00:16:45.736 user 0m12.774s 00:16:45.736 sys 0m1.567s 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 11:30:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:45.736 11:30:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.736 11:30:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.736 11:30:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 ************************************ 00:16:45.736 START TEST locking_app_on_locked_coremask 00:16:45.736 ************************************ 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:16:45.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60442 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60442 /var/tmp/spdk.sock 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60442 ']' 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.736 11:30:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 [2024-11-20 11:30:51.288358] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:16:45.736 [2024-11-20 11:30:51.289152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60442 ] 00:16:45.736 [2024-11-20 11:30:51.476132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.995 [2024-11-20 11:30:51.606470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60458 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60458 /var/tmp/spdk2.sock 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60458 /var/tmp/spdk2.sock 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60458 /var/tmp/spdk2.sock 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60458 ']' 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:46.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.931 11:30:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:46.931 [2024-11-20 11:30:52.597246] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
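Here a second default-locking target (pid 60458 in this run) is started on the core that pid 60442 already claimed, so its startup must abort; the "Cannot create lock on core 0" error further down confirms it, and the test asserts the failure through NOT waitforlisten. In outline:

    # Second default-locking target on an already-claimed core must abort (sketch):
    spdk_tgt -m 0x1 &                           # first target holds spdk_cpu_lock_000
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock      # second target: 'Cannot create lock on core 0' -> exits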
00:16:46.931 [2024-11-20 11:30:52.597437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60458 ] 00:16:47.229 [2024-11-20 11:30:52.801078] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60442 has claimed it. 00:16:47.229 [2024-11-20 11:30:52.801220] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:47.508 ERROR: process (pid: 60458) is no longer running 00:16:47.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60458) - No such process 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60442 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60442 00:16:47.508 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60442 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60442 ']' 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60442 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60442 00:16:48.075 killing process with pid 60442 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60442' 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60442 00:16:48.075 11:30:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60442 00:16:50.607 ************************************ 00:16:50.607 END TEST locking_app_on_locked_coremask 00:16:50.607 ************************************ 00:16:50.607 00:16:50.607 real 0m4.608s 00:16:50.608 user 0m4.967s 00:16:50.608 sys 0m0.868s 00:16:50.608 11:30:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.608 11:30:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 11:30:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:50.608 11:30:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.608 11:30:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.608 11:30:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 ************************************ 00:16:50.608 START TEST locking_overlapped_coremask 00:16:50.608 ************************************ 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60534 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60534 /var/tmp/spdk.sock 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60534 ']' 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.608 11:30:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 [2024-11-20 11:30:55.947829] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
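locking_overlapped_coremask, starting above, moves from a single shared core to overlapping masks: -m 0x7 is binary 00111 (cores 0, 1, 2) and the second instance's -m 0x1c is 11100 (cores 2, 3, 4), so both claim core 2 and the second startup must abort, which is exactly the "Cannot create lock on core 2" error further down. The overlap can be decoded mechanically:

    # Decode each -m mask to the cores it claims (sketch):
    for mask in 0x7 0x1c; do
        for (( c = 0; c < 8; c++ )); do
            (( mask >> c & 1 )) && printf '%s claims core %d\n' "$mask" "$c"
        done
    done
    # 0x7 -> cores 0 1 2; 0x1c -> cores 2 3 4; both want core 2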
00:16:50.608 [2024-11-20 11:30:55.948024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:16:50.608 [2024-11-20 11:30:56.135066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.608 [2024-11-20 11:30:56.268816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.608 [2024-11-20 11:30:56.268925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.608 [2024-11-20 11:30:56.268942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60553 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60553 /var/tmp/spdk2.sock 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60553 /var/tmp/spdk2.sock 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60553 /var/tmp/spdk2.sock 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60553 ']' 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.543 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:51.543 [2024-11-20 11:30:57.240132] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:16:51.543 [2024-11-20 11:30:57.240859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60553 ] 00:16:51.801 [2024-11-20 11:30:57.445228] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60534 has claimed it. 00:16:51.801 [2024-11-20 11:30:57.445311] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:52.368 ERROR: process (pid: 60553) is no longer running 00:16:52.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60553) - No such process 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60534 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60534 ']' 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60534 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60534 00:16:52.368 killing process with pid 60534 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60534' 00:16:52.368 11:30:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60534 00:16:52.368 11:30:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60534 00:16:54.956 00:16:54.956 real 0m4.305s 00:16:54.956 user 0m11.599s 00:16:54.956 sys 0m0.688s 00:16:54.956 ************************************ 00:16:54.956 END TEST locking_overlapped_coremask 00:16:54.956 ************************************ 00:16:54.956 11:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.956 11:31:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:54.956 11:31:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:54.956 11:31:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.956 11:31:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.956 11:31:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:54.956 ************************************ 00:16:54.956 START TEST locking_overlapped_coremask_via_rpc 00:16:54.956 ************************************ 00:16:54.956 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:16:54.956 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60617 00:16:54.956 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60617 /var/tmp/spdk.sock 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60617 ']' 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.957 11:31:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.957 [2024-11-20 11:31:00.302054] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:54.957 [2024-11-20 11:31:00.303365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60617 ] 00:16:54.957 [2024-11-20 11:31:00.487784] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
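After the refused claim, check_remaining_locks (traced above) asserts that the lock files left in /var/tmp are exactly the three belonging to the surviving -m 0x7 target, cores 000 through 002. Restated as a sketch:

    # check_remaining_locks, restated (sketch):
    locks=(/var/tmp/spdk_cpu_lock_*)                  # what actually survives
    expected=(/var/tmp/spdk_cpu_lock_{000..002})      # what the -m 0x7 owner should hold
    [[ ${locks[*]} == "${expected[*]}" ]]             # the test compares exactly these two lists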
00:16:54.957 [2024-11-20 11:31:00.487850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.957 [2024-11-20 11:31:00.620962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.957 [2024-11-20 11:31:00.621067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.957 [2024-11-20 11:31:00.621081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60635 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60635 /var/tmp/spdk2.sock 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60635 ']' 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.892 11:31:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.892 [2024-11-20 11:31:01.605339] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:55.892 [2024-11-20 11:31:01.605825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60635 ] 00:16:56.150 [2024-11-20 11:31:01.812732] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
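locking_overlapped_coremask_via_rpc sets up the same 0x7/0x1c overlap, but both targets boot with --disable-cpumask-locks so startup succeeds; the collision is then provoked at runtime by enabling the locks over RPC, and the second enable is what fails, as the JSON-RPC exchange below shows. In outline:

    # Same overlap, raced at runtime instead of at startup (sketch):
    spdk_tgt -m 0x7  --disable-cpumask-locks &
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    scripts/rpc.py framework_enable_cpumask_locks                          # first claim: cores 0-2 locked
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # refused: core 2 already claimed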
00:16:56.150 [2024-11-20 11:31:01.812841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.409 [2024-11-20 11:31:02.092626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.409 [2024-11-20 11:31:02.092715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.409 [2024-11-20 11:31:02.092728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.940 [2024-11-20 11:31:04.419741] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60617 has claimed it. 00:16:58.940 request: 00:16:58.940 { 00:16:58.940 "method": "framework_enable_cpumask_locks", 00:16:58.940 "req_id": 1 00:16:58.940 } 00:16:58.940 Got JSON-RPC error response 00:16:58.940 response: 00:16:58.940 { 00:16:58.940 "code": -32603, 00:16:58.940 "message": "Failed to claim CPU core: 2" 00:16:58.940 } 00:16:58.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
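The -32603 "Failed to claim CPU core: 2" response above is the expected collision: the second target (mask 0x1c, cores 2-4) tries to take the per-core lock files that the first target (mask 0x7, cores 0-2) claimed a moment earlier, and the two masks overlap on core 2. A minimal sketch of the same collision outside the harness, using only the binaries and flags visible in this log; the sleep is a crude stand-in for the waitforlisten helper, and process cleanup is left out:

    # Reproduce the core-lock collision (assumes an SPDK build under $SPDK).
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &      # cores 0-2, locks off at startup
    $SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock \
        --disable-cpumask-locks &                                  # cores 2-4, overlaps on core 2
    sleep 2                                                        # crude wait; the tests use waitforlisten
    $SPDK/scripts/rpc.py framework_enable_cpumask_locks            # first target claims cores 0-2
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected: -32603, core 2 already claimed'         # matches the response above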
00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60617 /var/tmp/spdk.sock 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60617 ']' 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60635 /var/tmp/spdk2.sock 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60635 ']' 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:58.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
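The check_remaining_locks step just below compares the lock files actually present under /var/tmp against a brace-expanded list derived from the first target's mask (0x7, cores 0-2). A quick way to inspect the same state on a live system; treat this as an illustration, not part of the suite:

    # Lock files a successfully locked -m 0x7 target should hold:
    ls -l /var/tmp/spdk_cpu_lock_*                      # expect ..._000, ..._001, ..._002
    printf '%s\n' /var/tmp/spdk_cpu_lock_{000..002}     # the exact glob the test expands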
00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.940 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:59.506 00:16:59.506 real 0m4.814s 00:16:59.506 user 0m1.758s 00:16:59.506 sys 0m0.228s 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.506 11:31:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.506 ************************************ 00:16:59.506 END TEST locking_overlapped_coremask_via_rpc 00:16:59.506 ************************************ 00:16:59.506 11:31:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:16:59.506 11:31:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60617 ]] 00:16:59.506 11:31:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60617 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60617 ']' 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60617 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60617 00:16:59.506 killing process with pid 60617 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.506 11:31:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60617' 00:16:59.507 11:31:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60617 00:16:59.507 11:31:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60617 00:17:02.037 11:31:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60635 ]] 00:17:02.037 11:31:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60635 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60635 ']' 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60635 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.037 
11:31:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60635 00:17:02.037 killing process with pid 60635 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60635' 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60635 00:17:02.037 11:31:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60635 00:17:03.937 11:31:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:03.937 11:31:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:03.937 11:31:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60617 ]] 00:17:03.937 11:31:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60617 00:17:03.937 11:31:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60617 ']' 00:17:03.937 11:31:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60617 00:17:03.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60617) - No such process 00:17:03.937 Process with pid 60617 is not found 00:17:03.937 11:31:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60617 is not found' 00:17:03.938 Process with pid 60635 is not found 00:17:03.938 11:31:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60635 ]] 00:17:03.938 11:31:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60635 00:17:03.938 11:31:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60635 ']' 00:17:03.938 11:31:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60635 00:17:03.938 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60635) - No such process 00:17:03.938 11:31:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60635 is not found' 00:17:03.938 11:31:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:03.938 ************************************ 00:17:03.938 END TEST cpu_locks 00:17:03.938 ************************************ 00:17:03.938 00:17:03.938 real 0m50.693s 00:17:03.938 user 1m27.473s 00:17:03.938 sys 0m7.563s 00:17:03.938 11:31:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.938 11:31:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:03.938 ************************************ 00:17:03.938 END TEST event 00:17:03.938 ************************************ 00:17:03.938 00:17:03.938 real 1m23.321s 00:17:03.938 user 2m32.346s 00:17:03.938 sys 0m11.679s 00:17:03.938 11:31:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.938 11:31:09 event -- common/autotest_common.sh@10 -- # set +x 00:17:03.938 11:31:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:03.938 11:31:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.938 11:31:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.938 11:31:09 -- common/autotest_common.sh@10 -- # set +x 00:17:03.938 ************************************ 00:17:03.938 START TEST thread 00:17:03.938 ************************************ 00:17:03.938 11:31:09 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:03.938 * Looking for test storage... 
00:17:03.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:03.938 11:31:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.938 11:31:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.938 11:31:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:04.195 11:31:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.195 11:31:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.195 11:31:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.195 11:31:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.195 11:31:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.195 11:31:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.195 11:31:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.195 11:31:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.195 11:31:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.195 11:31:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.195 11:31:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.195 11:31:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:17:04.195 11:31:09 thread -- scripts/common.sh@345 -- # : 1 00:17:04.195 11:31:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.195 11:31:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:04.195 11:31:09 thread -- scripts/common.sh@365 -- # decimal 1 00:17:04.195 11:31:09 thread -- scripts/common.sh@353 -- # local d=1 00:17:04.195 11:31:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.195 11:31:09 thread -- scripts/common.sh@355 -- # echo 1 00:17:04.195 11:31:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.195 11:31:09 thread -- scripts/common.sh@366 -- # decimal 2 00:17:04.195 11:31:09 thread -- scripts/common.sh@353 -- # local d=2 00:17:04.195 11:31:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.195 11:31:09 thread -- scripts/common.sh@355 -- # echo 2 00:17:04.195 11:31:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.195 11:31:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.195 11:31:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.195 11:31:09 thread -- scripts/common.sh@368 -- # return 0 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.195 --rc genhtml_branch_coverage=1 00:17:04.195 --rc genhtml_function_coverage=1 00:17:04.195 --rc genhtml_legend=1 00:17:04.195 --rc geninfo_all_blocks=1 00:17:04.195 --rc geninfo_unexecuted_blocks=1 00:17:04.195 00:17:04.195 ' 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.195 --rc genhtml_branch_coverage=1 00:17:04.195 --rc genhtml_function_coverage=1 00:17:04.195 --rc genhtml_legend=1 00:17:04.195 --rc geninfo_all_blocks=1 00:17:04.195 --rc geninfo_unexecuted_blocks=1 00:17:04.195 00:17:04.195 ' 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:04.195 --rc genhtml_branch_coverage=1 00:17:04.195 --rc genhtml_function_coverage=1 00:17:04.195 --rc genhtml_legend=1 00:17:04.195 --rc geninfo_all_blocks=1 00:17:04.195 --rc geninfo_unexecuted_blocks=1 00:17:04.195 00:17:04.195 ' 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:04.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.195 --rc genhtml_branch_coverage=1 00:17:04.195 --rc genhtml_function_coverage=1 00:17:04.195 --rc genhtml_legend=1 00:17:04.195 --rc geninfo_all_blocks=1 00:17:04.195 --rc geninfo_unexecuted_blocks=1 00:17:04.195 00:17:04.195 ' 00:17:04.195 11:31:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.195 11:31:09 thread -- common/autotest_common.sh@10 -- # set +x 00:17:04.195 ************************************ 00:17:04.195 START TEST thread_poller_perf 00:17:04.195 ************************************ 00:17:04.195 11:31:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:04.195 [2024-11-20 11:31:09.810010] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:04.195 [2024-11-20 11:31:09.810167] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60830 ] 00:17:04.453 [2024-11-20 11:31:10.004233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.453 [2024-11-20 11:31:10.157616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.453 Running 1000 pollers for 1 seconds with 1 microseconds period. 
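The table that follows reports total busy TSC cycles, completed poller runs, and the TSC frequency; poller_cost is simply busy cycles divided by run count, converted to nanoseconds through tsc_hz. A hedged recomputation from this run's figures (awk stands in for the tool's own integer arithmetic, which is why the nanosecond values can differ by one):

    # Recompute poller_cost from the figures printed below.
    busy=2212637477; runs=297000; tsc_hz=2200000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
        cyc = b / r                                   # cycles per completed poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
    }'
    # -> poller_cost: 7449 (cyc), 3386 (nsec); the tool truncates to 7449 cyc
    #    before dividing by the clock rate, hence the 3385 nsec it prints.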
00:17:05.828 [2024-11-20T11:31:11.594Z] ====================================== 00:17:05.828 [2024-11-20T11:31:11.594Z] busy:2212637477 (cyc) 00:17:05.828 [2024-11-20T11:31:11.594Z] total_run_count: 297000 00:17:05.828 [2024-11-20T11:31:11.594Z] tsc_hz: 2200000000 (cyc) 00:17:05.828 [2024-11-20T11:31:11.594Z] ====================================== 00:17:05.828 [2024-11-20T11:31:11.594Z] poller_cost: 7449 (cyc), 3385 (nsec) 00:17:05.828 00:17:05.828 real 0m1.631s 00:17:05.828 user 0m1.419s 00:17:05.828 sys 0m0.101s 00:17:05.828 11:31:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.828 11:31:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:05.828 ************************************ 00:17:05.828 END TEST thread_poller_perf 00:17:05.828 ************************************ 00:17:05.828 11:31:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:05.828 11:31:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:17:05.828 11:31:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.828 11:31:11 thread -- common/autotest_common.sh@10 -- # set +x 00:17:05.828 ************************************ 00:17:05.828 START TEST thread_poller_perf 00:17:05.828 ************************************ 00:17:05.828 11:31:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:05.828 [2024-11-20 11:31:11.499176] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:05.828 [2024-11-20 11:31:11.499527] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60867 ] 00:17:06.086 [2024-11-20 11:31:11.681710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.086 Running 1000 pollers for 1 seconds with 0 microseconds period. 
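This second run passes -l 0, i.e. a 0-microsecond period: the 1000 pollers run back-to-back instead of being scheduled on a 1 us timer as in the run above. The much lower per-poll cost in the results just below plausibly reflects the missing timer bookkeeping; read that as an interpretation of the numbers, not a profiling result. The two invocations, with flags exactly as the harness passes them:

    # The two poller_perf invocations this suite runs (paths from the log).
    PERF=/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf
    $PERF -b 1000 -l 1 -t 1    # 1000 pollers, 1 us period, 1 s run -> ~7449 cyc/poll above
    $PERF -b 1000 -l 0 -t 1    # 1000 pollers, period 0 (busy),  1 s run -> results below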
00:17:06.086 [2024-11-20 11:31:11.800734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.463 [2024-11-20T11:31:13.229Z] ====================================== 00:17:07.463 [2024-11-20T11:31:13.229Z] busy:2203930770 (cyc) 00:17:07.463 [2024-11-20T11:31:13.229Z] total_run_count: 3680000 00:17:07.463 [2024-11-20T11:31:13.229Z] tsc_hz: 2200000000 (cyc) 00:17:07.463 [2024-11-20T11:31:13.229Z] ====================================== 00:17:07.463 [2024-11-20T11:31:13.229Z] poller_cost: 598 (cyc), 271 (nsec) 00:17:07.463 ************************************ 00:17:07.463 END TEST thread_poller_perf 00:17:07.463 ************************************ 00:17:07.463 00:17:07.463 real 0m1.570s 00:17:07.463 user 0m1.364s 00:17:07.463 sys 0m0.097s 00:17:07.463 11:31:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.463 11:31:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:07.463 11:31:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:07.463 ************************************ 00:17:07.463 END TEST thread 00:17:07.463 ************************************ 00:17:07.463 00:17:07.463 real 0m3.506s 00:17:07.463 user 0m2.930s 00:17:07.463 sys 0m0.353s 00:17:07.463 11:31:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.463 11:31:13 thread -- common/autotest_common.sh@10 -- # set +x 00:17:07.463 11:31:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:17:07.463 11:31:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:07.463 11:31:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:07.463 11:31:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.463 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:07.463 ************************************ 00:17:07.463 START TEST app_cmdline 00:17:07.463 ************************************ 00:17:07.463 11:31:13 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:07.463 * Looking for test storage... 
00:17:07.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:07.463 11:31:13 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:07.463 11:31:13 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:07.463 11:31:13 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:17:07.722 11:31:13 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.722 11:31:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:17:07.722 11:31:13 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.722 11:31:13 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:07.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.722 --rc genhtml_branch_coverage=1 00:17:07.722 --rc genhtml_function_coverage=1 00:17:07.722 --rc genhtml_legend=1 00:17:07.722 --rc geninfo_all_blocks=1 00:17:07.722 --rc geninfo_unexecuted_blocks=1 00:17:07.722 00:17:07.722 ' 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.723 --rc genhtml_branch_coverage=1 00:17:07.723 --rc genhtml_function_coverage=1 00:17:07.723 --rc genhtml_legend=1 00:17:07.723 --rc geninfo_all_blocks=1 00:17:07.723 --rc geninfo_unexecuted_blocks=1 00:17:07.723 
00:17:07.723 ' 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.723 --rc genhtml_branch_coverage=1 00:17:07.723 --rc genhtml_function_coverage=1 00:17:07.723 --rc genhtml_legend=1 00:17:07.723 --rc geninfo_all_blocks=1 00:17:07.723 --rc geninfo_unexecuted_blocks=1 00:17:07.723 00:17:07.723 ' 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:07.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.723 --rc genhtml_branch_coverage=1 00:17:07.723 --rc genhtml_function_coverage=1 00:17:07.723 --rc genhtml_legend=1 00:17:07.723 --rc geninfo_all_blocks=1 00:17:07.723 --rc geninfo_unexecuted_blocks=1 00:17:07.723 00:17:07.723 ' 00:17:07.723 11:31:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:07.723 11:31:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60956 00:17:07.723 11:31:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:07.723 11:31:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60956 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60956 ']' 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.723 11:31:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:07.723 [2024-11-20 11:31:13.419723] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:17:07.723 [2024-11-20 11:31:13.420101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60956 ] 00:17:07.981 [2024-11-20 11:31:13.602272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.238 [2024-11-20 11:31:13.759069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.172 11:31:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.172 11:31:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:09.172 { 00:17:09.172 "version": "SPDK v25.01-pre git sha1 92fb22519", 00:17:09.172 "fields": { 00:17:09.172 "major": 25, 00:17:09.172 "minor": 1, 00:17:09.172 "patch": 0, 00:17:09.172 "suffix": "-pre", 00:17:09.172 "commit": "92fb22519" 00:17:09.172 } 00:17:09.172 } 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:09.172 11:31:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.172 11:31:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:09.172 11:31:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.431 11:31:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:09.431 11:31:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:09.431 11:31:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:09.431 11:31:14 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:09.689 request: 00:17:09.689 { 00:17:09.689 "method": "env_dpdk_get_mem_stats", 00:17:09.690 "req_id": 1 00:17:09.690 } 00:17:09.690 Got JSON-RPC error response 00:17:09.690 response: 00:17:09.690 { 00:17:09.690 "code": -32601, 00:17:09.690 "message": "Method not found" 00:17:09.690 } 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.690 11:31:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60956 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60956 ']' 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60956 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60956 00:17:09.690 killing process with pid 60956 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60956' 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 60956 00:17:09.690 11:31:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 60956 00:17:12.221 ************************************ 00:17:12.221 END TEST app_cmdline 00:17:12.221 ************************************ 00:17:12.221 00:17:12.221 real 0m4.449s 00:17:12.221 user 0m4.910s 00:17:12.221 sys 0m0.657s 00:17:12.221 11:31:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.221 11:31:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:12.221 11:31:17 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:12.221 11:31:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:12.221 11:31:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.221 11:31:17 -- common/autotest_common.sh@10 -- # set +x 00:17:12.221 ************************************ 00:17:12.221 START TEST version 00:17:12.221 ************************************ 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:12.221 * Looking for test storage... 
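The -32601 "Method not found" above is the allowlist doing its job: app_cmdline started this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so anything else is rejected, env_dpdk_get_mem_stats included, even though it is a normal SPDK RPC on an unrestricted target. A sketch replaying both sides of that behaviour; sleep again stands in for waitforlisten:

    # Replay the allowlist behaviour app_cmdline just exercised.
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2                                             # crude stand-in for waitforlisten
    $SPDK/scripts/rpc.py spdk_get_version               # allowed -> version JSON as above
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats \
        || echo 'expected: -32601 Method not found'     # filtered by --rpcs-allowed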
00:17:12.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.221 11:31:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.221 11:31:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.221 11:31:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.221 11:31:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.221 11:31:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.221 11:31:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.221 11:31:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.221 11:31:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.221 11:31:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.221 11:31:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.221 11:31:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.221 11:31:17 version -- scripts/common.sh@344 -- # case "$op" in 00:17:12.221 11:31:17 version -- scripts/common.sh@345 -- # : 1 00:17:12.221 11:31:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.221 11:31:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.221 11:31:17 version -- scripts/common.sh@365 -- # decimal 1 00:17:12.221 11:31:17 version -- scripts/common.sh@353 -- # local d=1 00:17:12.221 11:31:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.221 11:31:17 version -- scripts/common.sh@355 -- # echo 1 00:17:12.221 11:31:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.221 11:31:17 version -- scripts/common.sh@366 -- # decimal 2 00:17:12.221 11:31:17 version -- scripts/common.sh@353 -- # local d=2 00:17:12.221 11:31:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.221 11:31:17 version -- scripts/common.sh@355 -- # echo 2 00:17:12.221 11:31:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.221 11:31:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.221 11:31:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.221 11:31:17 version -- scripts/common.sh@368 -- # return 0 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.221 --rc genhtml_branch_coverage=1 00:17:12.221 --rc genhtml_function_coverage=1 00:17:12.221 --rc genhtml_legend=1 00:17:12.221 --rc geninfo_all_blocks=1 00:17:12.221 --rc geninfo_unexecuted_blocks=1 00:17:12.221 00:17:12.221 ' 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.221 --rc genhtml_branch_coverage=1 00:17:12.221 --rc genhtml_function_coverage=1 00:17:12.221 --rc genhtml_legend=1 00:17:12.221 --rc geninfo_all_blocks=1 00:17:12.221 --rc geninfo_unexecuted_blocks=1 00:17:12.221 00:17:12.221 ' 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.221 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:12.221 --rc genhtml_branch_coverage=1 00:17:12.221 --rc genhtml_function_coverage=1 00:17:12.221 --rc genhtml_legend=1 00:17:12.221 --rc geninfo_all_blocks=1 00:17:12.221 --rc geninfo_unexecuted_blocks=1 00:17:12.221 00:17:12.221 ' 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.221 --rc genhtml_branch_coverage=1 00:17:12.221 --rc genhtml_function_coverage=1 00:17:12.221 --rc genhtml_legend=1 00:17:12.221 --rc geninfo_all_blocks=1 00:17:12.221 --rc geninfo_unexecuted_blocks=1 00:17:12.221 00:17:12.221 ' 00:17:12.221 11:31:17 version -- app/version.sh@17 -- # get_header_version major 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # cut -f2 00:17:12.221 11:31:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # tr -d '"' 00:17:12.221 11:31:17 version -- app/version.sh@17 -- # major=25 00:17:12.221 11:31:17 version -- app/version.sh@18 -- # get_header_version minor 00:17:12.221 11:31:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # cut -f2 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # tr -d '"' 00:17:12.221 11:31:17 version -- app/version.sh@18 -- # minor=1 00:17:12.221 11:31:17 version -- app/version.sh@19 -- # get_header_version patch 00:17:12.221 11:31:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # cut -f2 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # tr -d '"' 00:17:12.221 11:31:17 version -- app/version.sh@19 -- # patch=0 00:17:12.221 11:31:17 version -- app/version.sh@20 -- # get_header_version suffix 00:17:12.221 11:31:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # tr -d '"' 00:17:12.221 11:31:17 version -- app/version.sh@14 -- # cut -f2 00:17:12.221 11:31:17 version -- app/version.sh@20 -- # suffix=-pre 00:17:12.221 11:31:17 version -- app/version.sh@22 -- # version=25.1 00:17:12.221 11:31:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:12.221 11:31:17 version -- app/version.sh@28 -- # version=25.1rc0 00:17:12.221 11:31:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:12.221 11:31:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:12.221 11:31:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:17:12.221 11:31:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:17:12.221 00:17:12.221 real 0m0.282s 00:17:12.221 user 0m0.196s 00:17:12.221 sys 0m0.119s 00:17:12.221 ************************************ 00:17:12.221 END TEST version 00:17:12.221 ************************************ 00:17:12.221 11:31:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.221 11:31:17 version -- common/autotest_common.sh@10 -- # set +x 00:17:12.221 11:31:17 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:17:12.222 11:31:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:17:12.222 11:31:17 -- spdk/autotest.sh@194 -- # uname -s 00:17:12.222 11:31:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:12.222 11:31:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:12.222 11:31:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:12.222 11:31:17 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:17:12.222 11:31:17 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:12.222 11:31:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.222 11:31:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.222 11:31:17 -- common/autotest_common.sh@10 -- # set +x 00:17:12.222 ************************************ 00:17:12.222 START TEST blockdev_nvme 00:17:12.222 ************************************ 00:17:12.222 11:31:17 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:12.480 * Looking for test storage... 00:17:12.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:12.480 11:31:18 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.480 11:31:18 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.480 11:31:18 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.480 11:31:18 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.480 11:31:18 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.480 11:31:18 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.480 11:31:18 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.480 11:31:18 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.480 11:31:18 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.481 11:31:18 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.481 --rc genhtml_branch_coverage=1 00:17:12.481 --rc genhtml_function_coverage=1 00:17:12.481 --rc genhtml_legend=1 00:17:12.481 --rc geninfo_all_blocks=1 00:17:12.481 --rc geninfo_unexecuted_blocks=1 00:17:12.481 00:17:12.481 ' 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.481 --rc genhtml_branch_coverage=1 00:17:12.481 --rc genhtml_function_coverage=1 00:17:12.481 --rc genhtml_legend=1 00:17:12.481 --rc geninfo_all_blocks=1 00:17:12.481 --rc geninfo_unexecuted_blocks=1 00:17:12.481 00:17:12.481 ' 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.481 --rc genhtml_branch_coverage=1 00:17:12.481 --rc genhtml_function_coverage=1 00:17:12.481 --rc genhtml_legend=1 00:17:12.481 --rc geninfo_all_blocks=1 00:17:12.481 --rc geninfo_unexecuted_blocks=1 00:17:12.481 00:17:12.481 ' 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.481 --rc genhtml_branch_coverage=1 00:17:12.481 --rc genhtml_function_coverage=1 00:17:12.481 --rc genhtml_legend=1 00:17:12.481 --rc geninfo_all_blocks=1 00:17:12.481 --rc geninfo_unexecuted_blocks=1 00:17:12.481 00:17:12.481 ' 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:12.481 11:31:18 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61144 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:12.481 11:31:18 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61144 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61144 ']' 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.481 11:31:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.740 [2024-11-20 11:31:18.257180] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
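Every target launch in this log is followed by a waitforlisten call that blocks until the new process answers on its UNIX-domain RPC socket; the "Waiting for process to start up..." echoes come from that helper, which retries up to max_retries=100. Below is a simplified stand-in that only polls for the socket node; the real helper in autotest_common.sh does more, including (an assumption here) checking that the pid is still alive, and wait_for_sock is a hypothetical name:

    # Minimal stand-in for waitforlisten: poll until the RPC socket appears.
    wait_for_sock() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do                 # mirrors max_retries=100
            [[ -S $sock ]] && return 0                  # socket node exists -> target is up
            sleep 0.1
        done
        return 1
    }
    wait_for_sock /var/tmp/spdk.sock && echo 'target is listening'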
00:17:12.740 [2024-11-20 11:31:18.257855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:17:12.740 [2024-11-20 11:31:18.441087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.998 [2024-11-20 11:31:18.595948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.935 11:31:19 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.935 11:31:19 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:17:13.935 11:31:19 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:13.935 11:31:19 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:17:13.935 11:31:19 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:17:13.935 11:31:19 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:17:13.935 11:31:19 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:13.935 11:31:19 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:17:13.935 11:31:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.935 11:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.193 11:31:19 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.193 11:31:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:17:14.193 11:31:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.193 11:31:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.193 11:31:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.193 11:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.452 11:31:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.452 11:31:19 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:14.452 11:31:19 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:14.452 11:31:19 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:14.452 11:31:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.452 11:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.452 11:31:20 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.452 11:31:20 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:14.452 11:31:20 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:14.453 11:31:20 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e589cc79-d392-46dd-9d74-1b450f8849c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e589cc79-d392-46dd-9d74-1b450f8849c9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ead91021-034e-4a35-aad3-5aca267999ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ead91021-034e-4a35-aad3-5aca267999ee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9371d037-838b-43a8-8899-46e1612e2263"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9371d037-838b-43a8-8899-46e1612e2263",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b3969399-4b66-4ea1-a97a-81b6508f1bb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3969399-4b66-4ea1-a97a-81b6508f1bb5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9630bc33-1802-48d9-bfb7-68dfc0c392d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "9630bc33-1802-48d9-bfb7-68dfc0c392d9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "87242de2-3946-4647-b048-0c382d448ed7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "87242de2-3946-4647-b048-0c382d448ed7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:17:14.453 11:31:20 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:14.453 11:31:20 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:17:14.453 11:31:20 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:14.453 11:31:20 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61144 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61144 ']' 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61144 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:17:14.453 11:31:20 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61144 00:17:14.453 killing process with pid 61144 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61144' 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61144 00:17:14.453 11:31:20 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61144 00:17:16.982 11:31:22 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:16.982 11:31:22 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:16.982 11:31:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:16.982 11:31:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.982 11:31:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.982 ************************************ 00:17:16.982 START TEST bdev_hello_world 00:17:16.982 ************************************ 00:17:16.982 11:31:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:16.982 [2024-11-20 11:31:22.515183] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:16.982 [2024-11-20 11:31:22.515662] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61245 ] 00:17:16.982 [2024-11-20 11:31:22.691303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.239 [2024-11-20 11:31:22.824235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.805 [2024-11-20 11:31:23.485417] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:17.805 [2024-11-20 11:31:23.485488] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:17:17.805 [2024-11-20 11:31:23.485522] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:17.805 [2024-11-20 11:31:23.488801] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:17.805 [2024-11-20 11:31:23.489437] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:17.805 [2024-11-20 11:31:23.489482] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:17.805 [2024-11-20 11:31:23.489718] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
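For reference, the hello_world stage above reduces to one standalone command; a minimal reproduction sketch, assuming the tree built in this run and that gen_nvme.sh's --json-with-subsystems output matches the bdev.json used here (that flag is an assumption, not shown in this log):

  cd /home/vagrant/spdk_repo/spdk
  # Regenerate the PCIe bdev_nvme_attach_controller config dumped earlier in this log (assumed flag).
  scripts/gen_nvme.sh --json-with-subsystems > test/bdev/bdev.json
  # Writes "Hello World!" through bdev Nvme0n1 and reads it back, as logged above.
  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1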
00:17:17.805 00:17:17.805 [2024-11-20 11:31:23.489767] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:19.180 ************************************ 00:17:19.180 END TEST bdev_hello_world 00:17:19.180 ************************************ 00:17:19.180 00:17:19.180 real 0m2.135s 00:17:19.180 user 0m1.767s 00:17:19.180 sys 0m0.255s 00:17:19.180 11:31:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.180 11:31:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:19.180 11:31:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:19.180 11:31:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.180 11:31:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.180 11:31:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.180 ************************************ 00:17:19.180 START TEST bdev_bounds 00:17:19.180 ************************************ 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61287 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:19.180 Process bdevio pid: 61287 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61287' 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61287 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61287 ']' 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.180 11:31:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:19.180 [2024-11-20 11:31:24.714446] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
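The bdev_bounds stage drives the bdevio app just launched; run by hand, the pair amounts to roughly the following (a sketch; flags exactly as logged here):

  cd /home/vagrant/spdk_repo/spdk
  # -w: register the bdevs from bdev.json, then block until tests are triggered over RPC.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # Kick off the bdevio test matrix against every registered bdev.
  test/bdev/bdevio/tests.py perform_tests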
00:17:19.180 [2024-11-20 11:31:24.715622] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61287 ] 00:17:19.180 [2024-11-20 11:31:24.911408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.438 [2024-11-20 11:31:25.074475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.438 [2024-11-20 11:31:25.074605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.438 [2024-11-20 11:31:25.074617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.374 11:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.374 11:31:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:20.374 11:31:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:20.374 I/O targets: 00:17:20.374 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:20.374 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:20.374 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:20.374 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:20.374 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:20.374 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:20.374 00:17:20.374 00:17:20.374 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.374 http://cunit.sourceforge.net/ 00:17:20.374 00:17:20.374 00:17:20.374 Suite: bdevio tests on: Nvme3n1 00:17:20.374 Test: blockdev write read block ...passed 00:17:20.374 Test: blockdev write zeroes read block ...passed 00:17:20.374 Test: blockdev write zeroes read no split ...passed 00:17:20.374 Test: blockdev write zeroes read split ...passed 00:17:20.374 Test: blockdev write zeroes read split partial ...passed 00:17:20.374 Test: blockdev reset ...[2024-11-20 11:31:26.002081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:17:20.374 passed 00:17:20.374 Test: blockdev write read 8 blocks ...[2024-11-20 11:31:26.006035] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:17:20.374 passed 00:17:20.374 Test: blockdev write read size > 128k ...passed 00:17:20.374 Test: blockdev write read invalid size ...passed 00:17:20.374 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.374 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.374 Test: blockdev write read max offset ...passed 00:17:20.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.374 Test: blockdev writev readv 8 blocks ...passed 00:17:20.374 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.374 Test: blockdev writev readv block ...passed 00:17:20.374 Test: blockdev writev readv size > 128k ...passed 00:17:20.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.374 Test: blockdev comparev and writev ...[2024-11-20 11:31:26.013744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c300a000 len:0x1000 00:17:20.374 [2024-11-20 11:31:26.013808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:20.374 passed 00:17:20.374 Test: blockdev nvme passthru rw ...passed 00:17:20.374 Test: blockdev nvme passthru vendor specific ...passed 00:17:20.374 Test: blockdev nvme admin passthru ...[2024-11-20 11:31:26.014734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:20.375 [2024-11-20 11:31:26.014784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:20.375 passed 00:17:20.375 Test: blockdev copy ...passed 00:17:20.375 Suite: bdevio tests on: Nvme2n3 00:17:20.375 Test: blockdev write read block ...passed 00:17:20.375 Test: blockdev write zeroes read block ...passed 00:17:20.375 Test: blockdev write zeroes read no split ...passed 00:17:20.375 Test: blockdev write zeroes read split ...passed 00:17:20.375 Test: blockdev write zeroes read split partial ...passed 00:17:20.375 Test: blockdev reset ...[2024-11-20 11:31:26.079199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:17:20.375 [2024-11-20 11:31:26.083560] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:17:20.375 passed 00:17:20.375 Test: blockdev write read 8 blocks ...passed 00:17:20.375 Test: blockdev write read size > 128k ...passed 00:17:20.375 Test: blockdev write read invalid size ...passed 00:17:20.375 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.375 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.375 Test: blockdev write read max offset ...passed 00:17:20.375 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.375 Test: blockdev writev readv 8 blocks ...passed 00:17:20.375 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.375 Test: blockdev writev readv block ...passed 00:17:20.375 Test: blockdev writev readv size > 128k ...passed 00:17:20.375 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.375 Test: blockdev comparev and writev ...[2024-11-20 11:31:26.091945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:17:20.375 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2a6a06000 len:0x1000 00:17:20.375 [2024-11-20 11:31:26.092139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:20.375 passed 00:17:20.375 Test: blockdev nvme passthru vendor specific ...passed 00:17:20.375 Test: blockdev nvme admin passthru ...[2024-11-20 11:31:26.092940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:20.375 [2024-11-20 11:31:26.092994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:20.375 passed 00:17:20.375 Test: blockdev copy ...passed 00:17:20.375 Suite: bdevio tests on: Nvme2n2 00:17:20.375 Test: blockdev write read block ...passed 00:17:20.375 Test: blockdev write zeroes read block ...passed 00:17:20.375 Test: blockdev write zeroes read no split ...passed 00:17:20.375 Test: blockdev write zeroes read split ...passed 00:17:20.632 Test: blockdev write zeroes read split partial ...passed 00:17:20.632 Test: blockdev reset ...[2024-11-20 11:31:26.157141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:17:20.632 passed 00:17:20.632 Test: blockdev write read 8 blocks ...[2024-11-20 11:31:26.161443] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:17:20.632 passed 00:17:20.632 Test: blockdev write read size > 128k ...passed 00:17:20.632 Test: blockdev write read invalid size ...passed 00:17:20.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.632 Test: blockdev write read max offset ...passed 00:17:20.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.632 Test: blockdev writev readv 8 blocks ...passed 00:17:20.632 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.632 Test: blockdev writev readv block ...passed 00:17:20.632 Test: blockdev writev readv size > 128k ...passed 00:17:20.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.632 Test: blockdev comparev and writev ...[2024-11-20 11:31:26.169416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de83c000 len:0x1000 00:17:20.632 [2024-11-20 11:31:26.169477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:20.632 passed 00:17:20.632 Test: blockdev nvme passthru rw ...passed 00:17:20.632 Test: blockdev nvme passthru vendor specific ...passed 00:17:20.632 Test: blockdev nvme admin passthru ...[2024-11-20 11:31:26.170359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:20.632 [2024-11-20 11:31:26.170405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:20.632 passed 00:17:20.632 Test: blockdev copy ...passed 00:17:20.632 Suite: bdevio tests on: Nvme2n1 00:17:20.632 Test: blockdev write read block ...passed 00:17:20.632 Test: blockdev write zeroes read block ...passed 00:17:20.632 Test: blockdev write zeroes read no split ...passed 00:17:20.632 Test: blockdev write zeroes read split ...passed 00:17:20.632 Test: blockdev write zeroes read split partial ...passed 00:17:20.632 Test: blockdev reset ...[2024-11-20 11:31:26.237040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:17:20.632 [2024-11-20 11:31:26.241369] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:17:20.632 passed 00:17:20.632 Test: blockdev write read 8 blocks ...passed 00:17:20.632 Test: blockdev write read size > 128k ...passed 00:17:20.632 Test: blockdev write read invalid size ...passed 00:17:20.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.632 Test: blockdev write read max offset ...passed 00:17:20.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.632 Test: blockdev writev readv 8 blocks ...passed 00:17:20.632 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.632 Test: blockdev writev readv block ...passed 00:17:20.632 Test: blockdev writev readv size > 128k ...passed 00:17:20.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.632 Test: blockdev comparev and writev ...[2024-11-20 11:31:26.250050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:17:20.632 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2de838000 len:0x1000 00:17:20.632 [2024-11-20 11:31:26.250241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:20.632 passed 00:17:20.632 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:31:26.251036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:20.632 [2024-11-20 11:31:26.251078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:20.632 passed 00:17:20.632 Test: blockdev nvme admin passthru ...passed 00:17:20.632 Test: blockdev copy ...passed 00:17:20.632 Suite: bdevio tests on: Nvme1n1 00:17:20.632 Test: blockdev write read block ...passed 00:17:20.632 Test: blockdev write zeroes read block ...passed 00:17:20.632 Test: blockdev write zeroes read no split ...passed 00:17:20.632 Test: blockdev write zeroes read split ...passed 00:17:20.632 Test: blockdev write zeroes read split partial ...passed 00:17:20.632 Test: blockdev reset ...[2024-11-20 11:31:26.315807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:17:20.632 passed 00:17:20.632 Test: blockdev write read 8 blocks ...[2024-11-20 11:31:26.319565] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:17:20.632 passed 00:17:20.632 Test: blockdev write read size > 128k ...passed 00:17:20.632 Test: blockdev write read invalid size ...passed 00:17:20.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.632 Test: blockdev write read max offset ...passed 00:17:20.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.632 Test: blockdev writev readv 8 blocks ...passed 00:17:20.632 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.632 Test: blockdev writev readv block ...passed 00:17:20.632 Test: blockdev writev readv size > 128k ...passed 00:17:20.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.632 Test: blockdev comparev and writev ...[2024-11-20 11:31:26.327559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de834000 len:0x1000 00:17:20.632 [2024-11-20 11:31:26.327621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:20.632 passed 00:17:20.632 Test: blockdev nvme passthru rw ...passed 00:17:20.632 Test: blockdev nvme passthru vendor specific ...passed 00:17:20.632 Test: blockdev nvme admin passthru ...[2024-11-20 11:31:26.328443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:20.632 [2024-11-20 11:31:26.328490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:20.632 passed 00:17:20.632 Test: blockdev copy ...passed 00:17:20.632 Suite: bdevio tests on: Nvme0n1 00:17:20.632 Test: blockdev write read block ...passed 00:17:20.632 Test: blockdev write zeroes read block ...passed 00:17:20.632 Test: blockdev write zeroes read no split ...passed 00:17:20.632 Test: blockdev write zeroes read split ...passed 00:17:20.632 Test: blockdev write zeroes read split partial ...passed 00:17:20.632 Test: blockdev reset ...[2024-11-20 11:31:26.394930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:17:20.890 [2024-11-20 11:31:26.398788] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:17:20.890 passed 00:17:20.890 Test: blockdev write read 8 blocks ...passed 00:17:20.890 Test: blockdev write read size > 128k ...passed 00:17:20.890 Test: blockdev write read invalid size ...passed 00:17:20.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.890 Test: blockdev write read max offset ...passed 00:17:20.890 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.890 Test: blockdev writev readv 8 blocks ...passed 00:17:20.890 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.890 Test: blockdev writev readv block ...passed 00:17:20.890 Test: blockdev writev readv size > 128k ...passed 00:17:20.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.890 Test: blockdev comparev and writev ...passed 00:17:20.890 Test: blockdev nvme passthru rw ...[2024-11-20 11:31:26.406800] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:17:20.890 separate metadata which is not supported yet. 
00:17:20.890 passed 00:17:20.890 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:31:26.407542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:17:20.890 Test: blockdev nvme admin passthru ...RP2 0x0 00:17:20.890 [2024-11-20 11:31:26.407728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:17:20.890 passed 00:17:20.890 Test: blockdev copy ...passed 00:17:20.890 00:17:20.890 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.890 suites 6 6 n/a 0 0 00:17:20.890 tests 138 138 138 0 0 00:17:20.890 asserts 893 893 893 0 n/a 00:17:20.890 00:17:20.890 Elapsed time = 1.259 seconds 00:17:20.890 0 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61287 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61287 ']' 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61287 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61287 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.890 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.890 killing process with pid 61287 00:17:20.891 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61287' 00:17:20.891 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61287 00:17:20.891 11:31:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61287 00:17:21.826 11:31:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:21.826 00:17:21.826 real 0m2.834s 00:17:21.826 user 0m7.290s 00:17:21.826 sys 0m0.429s 00:17:21.826 11:31:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.826 11:31:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:21.826 ************************************ 00:17:21.826 END TEST bdev_bounds 00:17:21.826 ************************************ 00:17:21.826 11:31:27 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:21.826 11:31:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:21.826 11:31:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.826 11:31:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:21.826 ************************************ 00:17:21.826 START TEST bdev_nbd 00:17:21.826 ************************************ 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61349 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61349 /var/tmp/spdk-nbd.sock 00:17:21.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61349 ']' 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.826 11:31:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:22.083 [2024-11-20 11:31:27.590358] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
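The bdev_nbd stage below drives a bdev_svc app over /var/tmp/spdk-nbd.sock; the per-device attach/IO/detach cycle it verifies is roughly the following (a sketch; RPC names and dd flags as logged below):

  cd /home/vagrant/spdk_repo/spdk
  # Export bdev Nvme0n1 as a kernel block device through the nbd driver.
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # One direct 4 KiB read proves the export is live (mirrors the dd checks below).
  dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0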
00:17:22.083 [2024-11-20 11:31:27.590520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.083 [2024-11-20 11:31:27.775517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.341 [2024-11-20 11:31:27.906972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:22.905 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.162 1+0 records in 
00:17:23.162 1+0 records out 00:17:23.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600211 s, 6.8 MB/s 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:23.162 11:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.724 1+0 records in 00:17:23.724 1+0 records out 00:17:23.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592004 s, 6.9 MB/s 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:23.724 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.979 1+0 records in 00:17:23.979 1+0 records out 00:17:23.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684369 s, 6.0 MB/s 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.979 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:23.980 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.980 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.980 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:23.980 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:23.980 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:23.980 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.236 1+0 records in 00:17:24.236 1+0 records out 00:17:24.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669217 s, 6.1 MB/s 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.236 11:31:29 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.236 11:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.801 1+0 records in 00:17:24.801 1+0 records out 00:17:24.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082285 s, 5.0 MB/s 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:24.801 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.060 1+0 records in 00:17:25.060 1+0 records out 00:17:25.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611198 s, 6.7 MB/s 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:25.060 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd0", 00:17:25.318 "bdev_name": "Nvme0n1" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd1", 00:17:25.318 "bdev_name": "Nvme1n1" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd2", 00:17:25.318 "bdev_name": "Nvme2n1" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd3", 00:17:25.318 "bdev_name": "Nvme2n2" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd4", 00:17:25.318 "bdev_name": "Nvme2n3" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd5", 00:17:25.318 "bdev_name": "Nvme3n1" 00:17:25.318 } 00:17:25.318 ]' 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd0", 00:17:25.318 "bdev_name": "Nvme0n1" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd1", 00:17:25.318 "bdev_name": "Nvme1n1" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd2", 00:17:25.318 "bdev_name": "Nvme2n1" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd3", 00:17:25.318 "bdev_name": "Nvme2n2" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd4", 00:17:25.318 "bdev_name": "Nvme2n3" 00:17:25.318 }, 00:17:25.318 { 00:17:25.318 "nbd_device": "/dev/nbd5", 00:17:25.318 "bdev_name": "Nvme3n1" 00:17:25.318 } 00:17:25.318 ]' 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.318 11:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.575 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.833 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.090 11:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.349 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.915 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.173 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:27.433 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:27.433 11:31:32 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:27.433 11:31:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.433 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:17:27.692 /dev/nbd0 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.692 
11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.692 1+0 records in 00:17:27.692 1+0 records out 00:17:27.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741889 s, 5.5 MB/s 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.692 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:17:27.950 /dev/nbd1 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.950 1+0 records in 00:17:27.950 1+0 records out 00:17:27.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569174 s, 7.2 MB/s 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.950 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:17:28.208 /dev/nbd10 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.208 1+0 records in 00:17:28.208 1+0 records out 00:17:28.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546102 s, 7.5 MB/s 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.208 11:31:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:17:28.774 /dev/nbd11 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.774 11:31:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.774 1+0 records in 00:17:28.774 1+0 records out 00:17:28.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584025 s, 7.0 MB/s 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.774 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:17:29.031 /dev/nbd12 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:29.031 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.032 1+0 records in 00:17:29.032 1+0 records out 00:17:29.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634645 s, 6.5 MB/s 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:29.032 11:31:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:17:29.289 /dev/nbd13 
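[editor's note] Each nbd_start_disk above is followed by the same readiness probe (it runs once more, just below, for nbd13): poll /proc/partitions until the device node is published, then prove it serves I/O with a single 4 KiB O_DIRECT read. A paraphrased, standalone sketch of that probe follows; the function name and the retry sleep are my additions, and the in-tree helper lives in common/autotest_common.sh:

    # Wait until /dev/$1 exists and answers a direct read; non-zero on failure.
    wait_for_nbd() {
        local nbd_name=$1 i tmp size
        tmp=$(mktemp)
        # Step 1: the kernel lists a ready nbd device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Step 2: one 4 KiB O_DIRECT read proves the backing bdev is attached.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size -ne 0 ]]
    }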
00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.289 1+0 records in 00:17:29.289 1+0 records out 00:17:29.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577884 s, 7.1 MB/s 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.289 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:29.547 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd0", 00:17:29.547 "bdev_name": "Nvme0n1" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd1", 00:17:29.547 "bdev_name": "Nvme1n1" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd10", 00:17:29.547 "bdev_name": "Nvme2n1" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd11", 00:17:29.547 "bdev_name": "Nvme2n2" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd12", 00:17:29.547 "bdev_name": "Nvme2n3" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd13", 00:17:29.547 "bdev_name": "Nvme3n1" 00:17:29.547 } 00:17:29.547 ]' 00:17:29.547 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd0", 00:17:29.547 "bdev_name": "Nvme0n1" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd1", 00:17:29.547 "bdev_name": "Nvme1n1" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd10", 00:17:29.547 "bdev_name": "Nvme2n1" 
00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd11", 00:17:29.547 "bdev_name": "Nvme2n2" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd12", 00:17:29.547 "bdev_name": "Nvme2n3" 00:17:29.547 }, 00:17:29.547 { 00:17:29.547 "nbd_device": "/dev/nbd13", 00:17:29.547 "bdev_name": "Nvme3n1" 00:17:29.547 } 00:17:29.547 ]' 00:17:29.547 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:29.805 /dev/nbd1 00:17:29.805 /dev/nbd10 00:17:29.805 /dev/nbd11 00:17:29.805 /dev/nbd12 00:17:29.805 /dev/nbd13' 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:29.805 /dev/nbd1 00:17:29.805 /dev/nbd10 00:17:29.805 /dev/nbd11 00:17:29.805 /dev/nbd12 00:17:29.805 /dev/nbd13' 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:29.805 256+0 records in 00:17:29.805 256+0 records out 00:17:29.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00749178 s, 140 MB/s 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:29.805 256+0 records in 00:17:29.805 256+0 records out 00:17:29.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146142 s, 7.2 MB/s 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.805 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:30.062 256+0 records in 00:17:30.062 256+0 records out 00:17:30.062 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161555 s, 6.5 MB/s 00:17:30.062 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:30.062 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:30.320 256+0 records in 00:17:30.320 256+0 records out 00:17:30.320 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169752 s, 6.2 MB/s 00:17:30.320 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:30.320 11:31:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:30.320 256+0 records in 00:17:30.320 256+0 records out 00:17:30.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157488 s, 6.7 MB/s 00:17:30.320 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:30.320 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:30.578 256+0 records in 00:17:30.578 256+0 records out 00:17:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157831 s, 6.6 MB/s 00:17:30.578 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:30.578 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:30.836 256+0 records in 00:17:30.836 256+0 records out 00:17:30.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163556 s, 6.4 MB/s 00:17:30.836 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:30.836 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.837 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.096 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.354 11:31:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.614 11:31:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.614 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.872 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.130 11:31:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:32.696 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:32.955 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:33.214 malloc_lvol_verify 00:17:33.214 11:31:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:33.472 191a0022-d3c7-492b-8fcf-d84d8f023c31 00:17:33.472 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:33.730 ce686774-2baa-44e7-8005-76c897fee86d 00:17:33.730 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:33.989 /dev/nbd0 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:33.989 mke2fs 1.47.0 (5-Feb-2023) 00:17:33.989 Discarding device blocks: 0/4096 done 00:17:33.989 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:33.989 00:17:33.989 Allocating group tables: 0/1 done 00:17:33.989 Writing inode tables: 0/1 done 00:17:33.989 Creating journal (1024 blocks): done 00:17:33.989 Writing superblocks and filesystem accounting information: 0/1 done 00:17:33.989 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:33.989 11:31:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.989 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.247 11:31:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61349 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61349 ']' 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61349 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61349 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61349' 00:17:34.248 killing process with pid 61349 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61349 00:17:34.248 11:31:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61349 00:17:35.719 11:31:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:35.719 00:17:35.719 real 0m13.581s 00:17:35.719 user 0m19.398s 00:17:35.719 sys 0m4.425s 00:17:35.719 11:31:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.719 11:31:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:35.719 ************************************ 00:17:35.719 END TEST bdev_nbd 00:17:35.719 ************************************ 00:17:35.719 11:31:41 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:35.719 11:31:41 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:17:35.719 skipping fio tests on NVMe due to multi-ns failures. 00:17:35.719 11:31:41 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
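[editor's note] The lvol round-trip that bdev_nbd finishes on is worth pulling out of the trace: carve a logical volume out of a malloc bdev, export it over NBD, and require mkfs.ext4 to succeed on it. The RPCs, sizes, and socket path below are taken verbatim from the run above; only the shell variable is mine:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # must complete cleanly
    $RPC nbd_stop_disk /dev/nbd0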
00:17:35.719 11:31:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:35.719 11:31:41 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:35.719 11:31:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:35.719 11:31:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.719 11:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.719 ************************************ 00:17:35.719 START TEST bdev_verify 00:17:35.719 ************************************ 00:17:35.719 11:31:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:35.719 [2024-11-20 11:31:41.224005] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:35.719 [2024-11-20 11:31:41.224176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61757 ] 00:17:35.719 [2024-11-20 11:31:41.421620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:35.978 [2024-11-20 11:31:41.580574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.978 [2024-11-20 11:31:41.580572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.912 Running I/O for 5 seconds... 00:17:39.221 21440.00 IOPS, 83.75 MiB/s [2024-11-20T11:31:45.554Z] 20448.00 IOPS, 79.88 MiB/s [2024-11-20T11:31:46.930Z] 19541.33 IOPS, 76.33 MiB/s [2024-11-20T11:31:47.506Z] 19248.00 IOPS, 75.19 MiB/s [2024-11-20T11:31:47.506Z] 19046.40 IOPS, 74.40 MiB/s 00:17:41.740 Latency(us) 00:17:41.740 [2024-11-20T11:31:47.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.740 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x0 length 0xbd0bd 00:17:41.740 Nvme0n1 : 5.05 1544.65 6.03 0.00 0.00 82586.54 17635.14 77689.95 00:17:41.740 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:41.740 Nvme0n1 : 5.06 1592.55 6.22 0.00 0.00 80189.43 16443.58 77213.32 00:17:41.740 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x0 length 0xa0000 00:17:41.740 Nvme1n1 : 5.06 1544.10 6.03 0.00 0.00 82467.56 20256.58 74830.20 00:17:41.740 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0xa0000 length 0xa0000 00:17:41.740 Nvme1n1 : 5.07 1591.96 6.22 0.00 0.00 80064.34 16920.20 71970.44 00:17:41.740 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x0 length 0x80000 00:17:41.740 Nvme2n1 : 5.06 1543.46 6.03 0.00 0.00 82297.51 20494.89 72923.69 00:17:41.740 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x80000 length 0x80000 00:17:41.740 Nvme2n1 : 5.07 1591.39 6.22 0.00 0.00 79895.63 16324.42 68157.44 00:17:41.740 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x0 length 0x80000 00:17:41.740 Nvme2n2 : 5.06 1542.81 6.03 0.00 0.00 82180.18 19660.80 73876.95 00:17:41.740 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x80000 length 0x80000 00:17:41.740 Nvme2n2 : 5.07 1590.81 6.21 0.00 0.00 79745.64 16443.58 70063.94 00:17:41.740 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x0 length 0x80000 00:17:41.740 Nvme2n3 : 5.08 1550.94 6.06 0.00 0.00 81644.52 4617.31 76260.07 00:17:41.740 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.740 Verification LBA range: start 0x80000 length 0x80000 00:17:41.741 Nvme2n3 : 5.07 1590.20 6.21 0.00 0.00 79613.69 16443.58 73400.32 00:17:41.741 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:41.741 Verification LBA range: start 0x0 length 0x20000 00:17:41.741 Nvme3n1 : 5.09 1558.60 6.09 0.00 0.00 81178.22 10962.39 77213.32 00:17:41.741 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.741 Verification LBA range: start 0x20000 length 0x20000 00:17:41.741 Nvme3n1 : 5.07 1589.62 6.21 0.00 0.00 79479.94 10604.92 76260.07 00:17:41.741 [2024-11-20T11:31:47.507Z] =================================================================================================================== 00:17:41.741 [2024-11-20T11:31:47.507Z] Total : 18831.10 73.56 0.00 0.00 80928.12 4617.31 77689.95 00:17:43.117 00:17:43.117 real 0m7.682s 00:17:43.117 user 0m14.052s 00:17:43.117 sys 0m0.360s 00:17:43.117 11:31:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.117 11:31:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 ************************************ 00:17:43.117 END TEST bdev_verify 00:17:43.117 ************************************ 00:17:43.117 11:31:48 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:43.117 11:31:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:43.117 11:31:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.117 11:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 ************************************ 00:17:43.117 START TEST bdev_verify_big_io 00:17:43.117 ************************************ 00:17:43.117 11:31:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:43.375 [2024-11-20 11:31:48.954031] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
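[editor's note] Both verify suites drive the same bdevperf binary with the same JSON bdev config; only the I/O size changes between them. The exact invocations, reconstructed from the run_test lines in the trace (the trailing empty argument matches the trace):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: 4 KiB I/O, queue depth 128, 5 s, cores 0-1 (-m 0x3)
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''
    # bdev_verify_big_io: identical except for 64 KiB I/O
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''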
00:17:43.375 [2024-11-20 11:31:48.954206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61859 ] 00:17:43.633 [2024-11-20 11:31:49.148490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:43.633 [2024-11-20 11:31:49.292363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.633 [2024-11-20 11:31:49.292371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.567 Running I/O for 5 seconds... 00:17:49.239 1732.00 IOPS, 108.25 MiB/s [2024-11-20T11:31:55.979Z] 2181.50 IOPS, 136.34 MiB/s [2024-11-20T11:31:56.238Z] 2333.67 IOPS, 145.85 MiB/s [2024-11-20T11:31:56.238Z] 2286.75 IOPS, 142.92 MiB/s 00:17:50.472 Latency(us) 00:17:50.472 [2024-11-20T11:31:56.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.472 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x0 length 0xbd0b 00:17:50.472 Nvme0n1 : 5.67 124.06 7.75 0.00 0.00 994469.19 21567.30 976128.93 00:17:50.472 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:50.472 Nvme0n1 : 5.65 117.17 7.32 0.00 0.00 1038101.94 22163.08 1159153.11 00:17:50.472 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x0 length 0xa000 00:17:50.472 Nvme1n1 : 5.75 129.67 8.10 0.00 0.00 934909.45 32410.53 922746.88 00:17:50.472 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0xa000 length 0xa000 00:17:50.472 Nvme1n1 : 5.74 122.63 7.66 0.00 0.00 975644.52 87222.46 983754.94 00:17:50.472 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x0 length 0x8000 00:17:50.472 Nvme2n1 : 5.75 129.80 8.11 0.00 0.00 908290.68 32410.53 945624.90 00:17:50.472 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x8000 length 0x8000 00:17:50.472 Nvme2n1 : 5.86 124.37 7.77 0.00 0.00 925791.66 68157.44 945624.90 00:17:50.472 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x0 length 0x8000 00:17:50.472 Nvme2n2 : 5.76 133.39 8.34 0.00 0.00 862893.30 43849.54 968502.92 00:17:50.472 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x8000 length 0x8000 00:17:50.472 Nvme2n2 : 5.86 122.27 7.64 0.00 0.00 910255.07 49092.42 1799737.72 00:17:50.472 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x0 length 0x8000 00:17:50.472 Nvme2n3 : 5.76 133.34 8.33 0.00 0.00 838098.54 44802.79 991380.95 00:17:50.472 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x8000 length 0x8000 00:17:50.472 Nvme2n3 : 5.93 132.47 8.28 0.00 0.00 812908.59 28359.21 1837867.75 00:17:50.472 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x0 length 0x2000 00:17:50.472 Nvme3n1 : 5.86 152.83 9.55 0.00 0.00 715684.06 2934.23 1006632.96 00:17:50.472 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:50.472 Verification LBA range: start 0x2000 length 0x2000 00:17:50.472 Nvme3n1 : 5.99 168.70 10.54 0.00 0.00 624909.76 1057.51 1395559.33 00:17:50.472 [2024-11-20T11:31:56.238Z] =================================================================================================================== 00:17:50.472 [2024-11-20T11:31:56.238Z] Total : 1590.69 99.42 0.00 0.00 865557.34 1057.51 1837867.75 00:17:52.374 00:17:52.375 real 0m8.840s 00:17:52.375 user 0m16.386s 00:17:52.375 sys 0m0.382s 00:17:52.375 11:31:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.375 11:31:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.375 ************************************ 00:17:52.375 END TEST bdev_verify_big_io 00:17:52.375 ************************************ 00:17:52.375 11:31:57 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:52.375 11:31:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:52.375 11:31:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.375 11:31:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:52.375 ************************************ 00:17:52.375 START TEST bdev_write_zeroes 00:17:52.375 ************************************ 00:17:52.375 11:31:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:52.375 [2024-11-20 11:31:57.848749] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:52.375 [2024-11-20 11:31:57.848950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61975 ] 00:17:52.375 [2024-11-20 11:31:58.034269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.634 [2024-11-20 11:31:58.156878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.201 Running I/O for 1 seconds... 
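[editor's note] The MiB/s column in these result tables is derivable from the IOPS column, which allows a quick sanity check. For the big-I/O Total row just above, at 64 KiB per I/O:

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 1590.69 * 65536 / (1024 * 1024) }'
    # prints 99.42 MiB/s, matching the Total row above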
00:17:54.136 52608.00 IOPS, 205.50 MiB/s 00:17:54.136 Latency(us) 00:17:54.136 [2024-11-20T11:31:59.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.136 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:54.136 Nvme0n1 : 1.03 8715.55 34.05 0.00 0.00 14649.52 6702.55 28716.68 00:17:54.136 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:54.136 Nvme1n1 : 1.03 8702.28 33.99 0.00 0.00 14646.24 7268.54 27763.43 00:17:54.136 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:54.136 Nvme2n1 : 1.03 8689.16 33.94 0.00 0.00 14587.61 7208.96 26929.34 00:17:54.136 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:54.136 Nvme2n2 : 1.03 8676.27 33.89 0.00 0.00 14582.91 7566.43 26095.24 00:17:54.136 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:54.136 Nvme2n3 : 1.03 8662.95 33.84 0.00 0.00 14576.66 7268.54 26333.56 00:17:54.136 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:54.136 Nvme3n1 : 1.04 8650.10 33.79 0.00 0.00 14571.80 6970.65 28955.00 00:17:54.136 [2024-11-20T11:31:59.902Z] =================================================================================================================== 00:17:54.136 [2024-11-20T11:31:59.902Z] Total : 52096.30 203.50 0.00 0.00 14602.46 6702.55 28955.00 00:17:55.512 00:17:55.512 real 0m3.250s 00:17:55.512 user 0m2.843s 00:17:55.512 sys 0m0.284s 00:17:55.512 11:32:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.512 11:32:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:55.512 ************************************ 00:17:55.512 END TEST bdev_write_zeroes 00:17:55.512 ************************************ 00:17:55.512 11:32:01 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:55.512 11:32:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:55.512 11:32:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.512 11:32:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:55.512 ************************************ 00:17:55.512 START TEST bdev_json_nonenclosed 00:17:55.512 ************************************ 00:17:55.512 11:32:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:55.512 [2024-11-20 11:32:01.130689] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:17:55.512 [2024-11-20 11:32:01.130830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62028 ] 00:17:55.770 [2024-11-20 11:32:01.304743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.770 [2024-11-20 11:32:01.423160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.770 [2024-11-20 11:32:01.423285] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:55.770 [2024-11-20 11:32:01.423315] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:55.770 [2024-11-20 11:32:01.423330] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:56.028 00:17:56.028 real 0m0.635s 00:17:56.028 user 0m0.394s 00:17:56.028 sys 0m0.136s 00:17:56.028 11:32:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.028 11:32:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:56.028 ************************************ 00:17:56.028 END TEST bdev_json_nonenclosed 00:17:56.028 ************************************ 00:17:56.028 11:32:01 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.028 11:32:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:56.028 11:32:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.028 11:32:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.028 ************************************ 00:17:56.028 START TEST bdev_json_nonarray 00:17:56.028 ************************************ 00:17:56.028 11:32:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:56.287 [2024-11-20 11:32:01.832989] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:56.287 [2024-11-20 11:32:01.833170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ] 00:17:56.287 [2024-11-20 11:32:02.018329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.546 [2024-11-20 11:32:02.143864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.546 [2024-11-20 11:32:02.143985] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
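[editor's note] These two JSON config tests (bdev_json_nonenclosed above, bdev_json_nonarray ending here) are negative tests: bdevperf is handed a syntactically valid config with the wrong top-level shape and must exit non-zero with the diagnostics shown ("not enclosed in {}" and "'subsystems' should be an array."). A sketch of that pattern, with guessed file contents; the real nonenclosed.json and nonarray.json live under test/bdev and are not shown in this log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    echo '[]'                 > /tmp/nonenclosed.json  # top level is not an object
    echo '{"subsystems": {}}' > /tmp/nonarray.json     # "subsystems" is not an array

    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        if "$BDEVPERF" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 ''; then
            echo "expected $cfg to be rejected" >&2
            exit 1
        fi
    done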
00:17:56.546 [2024-11-20 11:32:02.144031] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:56.546 [2024-11-20 11:32:02.144045] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:56.804 00:17:56.804 real 0m0.690s 00:17:56.804 user 0m0.433s 00:17:56.804 sys 0m0.151s 00:17:56.804 11:32:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.804 11:32:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:56.804 ************************************ 00:17:56.804 END TEST bdev_json_nonarray 00:17:56.804 ************************************ 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:17:56.804 11:32:02 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:17:56.804 00:17:56.804 real 0m44.512s 00:17:56.804 user 1m7.136s 00:17:56.804 sys 0m7.428s 00:17:56.804 ************************************ 00:17:56.804 END TEST blockdev_nvme 00:17:56.804 ************************************ 00:17:56.804 11:32:02 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.804 11:32:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.804 11:32:02 -- spdk/autotest.sh@209 -- # uname -s 00:17:56.804 11:32:02 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:17:56.804 11:32:02 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:17:56.804 11:32:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:56.804 11:32:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.804 11:32:02 -- common/autotest_common.sh@10 -- # set +x 00:17:56.804 ************************************ 00:17:56.804 START TEST blockdev_nvme_gpt 00:17:56.804 ************************************ 00:17:56.804 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:17:57.064 * Looking for test storage... 
00:17:57.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.064 11:32:02 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.064 --rc genhtml_branch_coverage=1 00:17:57.064 --rc genhtml_function_coverage=1 00:17:57.064 --rc genhtml_legend=1 00:17:57.064 --rc geninfo_all_blocks=1 00:17:57.064 --rc geninfo_unexecuted_blocks=1 00:17:57.064 00:17:57.064 ' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.064 --rc 
genhtml_branch_coverage=1 00:17:57.064 --rc genhtml_function_coverage=1 00:17:57.064 --rc genhtml_legend=1 00:17:57.064 --rc geninfo_all_blocks=1 00:17:57.064 --rc geninfo_unexecuted_blocks=1 00:17:57.064 00:17:57.064 ' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.064 --rc genhtml_branch_coverage=1 00:17:57.064 --rc genhtml_function_coverage=1 00:17:57.064 --rc genhtml_legend=1 00:17:57.064 --rc geninfo_all_blocks=1 00:17:57.064 --rc geninfo_unexecuted_blocks=1 00:17:57.064 00:17:57.064 ' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.064 --rc genhtml_branch_coverage=1 00:17:57.064 --rc genhtml_function_coverage=1 00:17:57.064 --rc genhtml_legend=1 00:17:57.064 --rc geninfo_all_blocks=1 00:17:57.064 --rc geninfo_unexecuted_blocks=1 00:17:57.064 00:17:57.064 ' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62145 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62145 
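
The xtrace above walks scripts/common.sh deciding whether the installed lcov predates 2.x: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares them field by field, treating missing fields as 0. A condensed re-sketch of that comparison (assuming plain numeric fields; not the verbatim helper):

    lt() {  # exit 0 (true) when version $1 sorts before version $2
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo older   # 1 < 2 on the first field, so this prints "older"

The check comes out true here, which is why the lcov 1.x option spellings (--rc lcov_branch_coverage=1 and friends) are what end up exported in LCOV_OPTS/LCOV above.
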
00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62145 ']' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.064 11:32:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:17:57.323 [2024-11-20 11:32:02.849522] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:17:57.323 [2024-11-20 11:32:02.849740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62145 ] 00:17:57.323 [2024-11-20 11:32:03.034729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.580 [2024-11-20 11:32:03.155748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.514 11:32:03 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.514 11:32:03 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:17:58.514 11:32:03 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:58.514 11:32:03 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:17:58.514 11:32:03 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:58.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:58.772 Waiting for block devices as requested 00:17:59.030 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:59.030 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:59.030 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:59.030 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:04.299 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:04.299 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.299 11:32:09 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:04.299 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:18:04.300 11:32:09 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:18:04.300 BYT; 00:18:04.300 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:18:04.300 BYT; 00:18:04.300 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:04.300 11:32:09 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:04.300 11:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:18:05.680 The operation has completed successfully. 00:18:05.680 11:32:11 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:18:06.615 The operation has completed successfully. 00:18:06.615 11:32:12 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:06.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.441 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.700 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.700 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.700 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:18:07.700 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.700 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:07.700 [] 00:18:07.700 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.700 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:18:07.700 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:18:07.700 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:07.700 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:07.700 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:18:07.700 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.700 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.960 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.960 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:18:07.960 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:07.960 11:32:13 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.960 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.960 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.233 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.233 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:08.233 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.233 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:08.233 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.233 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:08.233 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:08.234 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3a9dac79-390c-44b5-bc42-e3491eb0dddc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3a9dac79-390c-44b5-bc42-e3491eb0dddc",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "29cfcd72-abc1-4659-ac6a-cf6e39dfcc69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "29cfcd72-abc1-4659-ac6a-cf6e39dfcc69",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d5114c98-0f16-461d-a223-8d0f9ad4dd9f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d5114c98-0f16-461d-a223-8d0f9ad4dd9f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b3f04d67-005b-4cfc-a936-2f4975391de7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3f04d67-005b-4cfc-a936-2f4975391de7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "847a7bfe-7fc1-4af9-b247-4b211e932bcf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "847a7bfe-7fc1-4af9-b247-4b211e932bcf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:08.234 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:08.234 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:18:08.234 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:08.234 11:32:13 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62145 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62145 ']' 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62145 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62145 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.234 killing process with pid 62145 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62145' 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62145 00:18:08.234 11:32:13 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62145 00:18:10.766 11:32:16 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:10.766 11:32:16 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:10.766 11:32:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:10.766 11:32:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.766 11:32:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:10.766 ************************************ 00:18:10.766 START TEST bdev_hello_world 00:18:10.766 ************************************ 00:18:10.766 11:32:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:10.766 
[2024-11-20 11:32:16.243953] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:18:10.766 [2024-11-20 11:32:16.244171] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62778 ] 00:18:10.766 [2024-11-20 11:32:16.428242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.024 [2024-11-20 11:32:16.562053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.591 [2024-11-20 11:32:17.227982] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:11.591 [2024-11-20 11:32:17.228056] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:11.591 [2024-11-20 11:32:17.228094] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:11.591 [2024-11-20 11:32:17.231413] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:11.591 [2024-11-20 11:32:17.231903] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:11.591 [2024-11-20 11:32:17.231957] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:11.591 [2024-11-20 11:32:17.232150] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:18:11.591 00:18:11.591 [2024-11-20 11:32:17.232202] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:12.548 00:18:12.548 real 0m2.127s 00:18:12.548 user 0m1.733s 00:18:12.548 sys 0m0.282s 00:18:12.548 11:32:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.548 11:32:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:12.548 ************************************ 00:18:12.548 END TEST bdev_hello_world 00:18:12.548 ************************************ 00:18:12.548 11:32:18 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:12.548 11:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.548 11:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.548 11:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:12.548 ************************************ 00:18:12.548 START TEST bdev_bounds 00:18:12.548 ************************************ 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62820 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:12.816 Process bdevio pid: 62820 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62820' 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62820 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62820 ']' 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.816 11:32:18 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.816 11:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:12.816 [2024-11-20 11:32:18.425312] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:18:12.816 [2024-11-20 11:32:18.425521] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62820 ] 00:18:13.112 [2024-11-20 11:32:18.608812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.112 [2024-11-20 11:32:18.744559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.113 [2024-11-20 11:32:18.744627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.113 [2024-11-20 11:32:18.744630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.679 11:32:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.679 11:32:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:13.679 11:32:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:13.938 I/O targets: 00:18:13.938 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:13.938 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:18:13.938 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:18:13.938 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:13.938 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:13.938 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:13.938 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:13.938 00:18:13.938 00:18:13.938 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.938 http://cunit.sourceforge.net/ 00:18:13.938 00:18:13.938 00:18:13.938 Suite: bdevio tests on: Nvme3n1 00:18:13.938 Test: blockdev write read block ...passed 00:18:13.938 Test: blockdev write zeroes read block ...passed 00:18:13.938 Test: blockdev write zeroes read no split ...passed 00:18:13.938 Test: blockdev write zeroes read split ...passed 00:18:13.938 Test: blockdev write zeroes read split partial ...passed 00:18:13.938 Test: blockdev reset ...[2024-11-20 11:32:19.605544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:18:13.938 [2024-11-20 11:32:19.609725] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
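
The MiB figures in the I/O targets list above follow directly from num_blocks x block_size in the bdev dump earlier, rounded to whole MiB; e.g. for Nvme0n1's 1548666 blocks of 4096 bytes:

    echo $(( 1548666 * 4096 / 1024 / 1024 ))   # 6049 (6049.8 before truncation, reported as 6050 MiB)

and 655104 x 4096 bytes likewise gives the 2559 MiB shown for Nvme1n1p1.
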
00:18:13.938 passed 00:18:13.938 Test: blockdev write read 8 blocks ...passed 00:18:13.938 Test: blockdev write read size > 128k ...passed 00:18:13.938 Test: blockdev write read invalid size ...passed 00:18:13.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.938 Test: blockdev write read max offset ...passed 00:18:13.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.938 Test: blockdev writev readv 8 blocks ...passed 00:18:13.938 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.938 Test: blockdev writev readv block ...passed 00:18:13.938 Test: blockdev writev readv size > 128k ...passed 00:18:13.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.938 Test: blockdev comparev and writev ...passed 00:18:13.938 Test: blockdev nvme passthru rw ...[2024-11-20 11:32:19.618001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1004000 len:0x1000 00:18:13.938 [2024-11-20 11:32:19.618057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:13.938 passed 00:18:13.938 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:32:19.618788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:13.938 [2024-11-20 11:32:19.618829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:13.938 passed 00:18:13.938 Test: blockdev nvme admin passthru ...passed 00:18:13.938 Test: blockdev copy ...passed 00:18:13.938 Suite: bdevio tests on: Nvme2n3 00:18:13.938 Test: blockdev write read block ...passed 00:18:13.938 Test: blockdev write zeroes read block ...passed 00:18:13.938 Test: blockdev write zeroes read no split ...passed 00:18:13.938 Test: blockdev write zeroes read split ...passed 00:18:13.938 Test: blockdev write zeroes read split partial ...passed 00:18:13.938 Test: blockdev reset ...[2024-11-20 11:32:19.698742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:18:14.197 [2024-11-20 11:32:19.703068] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
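
The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) notices in these suites are expected, not real failures: bdevio's comparev-and-writev test exercises an NVMe COMPARE that is meant to mismatch, and the passthru tests send a fabrics opcode that a PCIe controller must reject. The pair in parentheses is SCT/SC from the completion; per the NVMe base spec, status code type 0x2 is Media and Data Integrity Errors and status code 0x85 within it is Compare Failure. A trivial decoder for the pairs seen in this run (a sketch, not an SPDK helper):

    decode_status() {  # usage: decode_status SCT SC, as printed "(SCT/SC)"
      case "$1/$2" in
        00/01) echo "GENERIC / INVALID OPCODE" ;;
        02/85) echo "MEDIA AND DATA INTEGRITY / COMPARE FAILURE" ;;
        *)     echo "SCT=$1 SC=$2 (see the NVMe base spec status tables)" ;;
      esac
    }
    decode_status 02 85   # -> MEDIA AND DATA INTEGRITY / COMPARE FAILURE
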
00:18:14.197 passed 00:18:14.197 Test: blockdev write read 8 blocks ...passed 00:18:14.197 Test: blockdev write read size > 128k ...passed 00:18:14.197 Test: blockdev write read invalid size ...passed 00:18:14.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.197 Test: blockdev write read max offset ...passed 00:18:14.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.197 Test: blockdev writev readv 8 blocks ...passed 00:18:14.197 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.197 Test: blockdev writev readv block ...passed 00:18:14.197 Test: blockdev writev readv size > 128k ...passed 00:18:14.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.197 Test: blockdev comparev and writev ...passed 00:18:14.197 Test: blockdev nvme passthru rw ...[2024-11-20 11:32:19.712595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1002000 len:0x1000 00:18:14.197 [2024-11-20 11:32:19.712655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:14.197 passed 00:18:14.197 Test: blockdev nvme passthru vendor specific ...passed 00:18:14.197 Test: blockdev nvme admin passthru ...[2024-11-20 11:32:19.713488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:14.197 [2024-11-20 11:32:19.713552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:14.197 passed 00:18:14.197 Test: blockdev copy ...passed 00:18:14.197 Suite: bdevio tests on: Nvme2n2 00:18:14.197 Test: blockdev write read block ...passed 00:18:14.197 Test: blockdev write zeroes read block ...passed 00:18:14.197 Test: blockdev write zeroes read no split ...passed 00:18:14.197 Test: blockdev write zeroes read split ...passed 00:18:14.197 Test: blockdev write zeroes read split partial ...passed 00:18:14.197 Test: blockdev reset ...[2024-11-20 11:32:19.790835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:18:14.197 [2024-11-20 11:32:19.795241] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:18:14.197 passed 00:18:14.197 Test: blockdev write read 8 blocks ...passed 00:18:14.197 Test: blockdev write read size > 128k ...passed 00:18:14.197 Test: blockdev write read invalid size ...passed 00:18:14.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.197 Test: blockdev write read max offset ...passed 00:18:14.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.197 Test: blockdev writev readv 8 blocks ...passed 00:18:14.197 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.197 Test: blockdev writev readv block ...passed 00:18:14.197 Test: blockdev writev readv size > 128k ...passed 00:18:14.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.197 Test: blockdev comparev and writev ...[2024-11-20 11:32:19.804187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e38000 len:0x1000 00:18:14.197 [2024-11-20 11:32:19.804247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:14.197 passed 00:18:14.197 Test: blockdev nvme passthru rw ...passed 00:18:14.197 Test: blockdev nvme passthru vendor specific ...passed 00:18:14.197 Test: blockdev nvme admin passthru ...[2024-11-20 11:32:19.805172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:14.197 [2024-11-20 11:32:19.805221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:14.197 passed 00:18:14.197 Test: blockdev copy ...passed 00:18:14.197 Suite: bdevio tests on: Nvme2n1 00:18:14.197 Test: blockdev write read block ...passed 00:18:14.197 Test: blockdev write zeroes read block ...passed 00:18:14.197 Test: blockdev write zeroes read no split ...passed 00:18:14.197 Test: blockdev write zeroes read split ...passed 00:18:14.197 Test: blockdev write zeroes read split partial ...passed 00:18:14.197 Test: blockdev reset ...[2024-11-20 11:32:19.879667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:18:14.197 [2024-11-20 11:32:19.883895] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:18:14.197 Test: blockdev write read 8 blocks ...
00:18:14.197 passed 00:18:14.197 Test: blockdev write read size > 128k ...passed 00:18:14.197 Test: blockdev write read invalid size ...passed 00:18:14.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.197 Test: blockdev write read max offset ...passed 00:18:14.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.197 Test: blockdev writev readv 8 blocks ...passed 00:18:14.197 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.197 Test: blockdev writev readv block ...passed 00:18:14.197 Test: blockdev writev readv size > 128k ...passed 00:18:14.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.197 Test: blockdev comparev and writev ...[2024-11-20 11:32:19.892815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e34000 len:0x1000 00:18:14.197 [2024-11-20 11:32:19.892887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:14.197 passed 00:18:14.197 Test: blockdev nvme passthru rw ...passed 00:18:14.197 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:32:19.893707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:14.197 [2024-11-20 11:32:19.893750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:14.197 passed 00:18:14.197 Test: blockdev nvme admin passthru ...passed 00:18:14.197 Test: blockdev copy ...passed 00:18:14.197 Suite: bdevio tests on: Nvme1n1p2 00:18:14.197 Test: blockdev write read block ...passed 00:18:14.197 Test: blockdev write zeroes read block ...passed 00:18:14.197 Test: blockdev write zeroes read no split ...passed 00:18:14.197 Test: blockdev write zeroes read split ...passed 00:18:14.456 Test: blockdev write zeroes read split partial ...passed 00:18:14.456 Test: blockdev reset ...[2024-11-20 11:32:19.971148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:18:14.456 passed 00:18:14.456 Test: blockdev write read 8 blocks ...[2024-11-20 11:32:19.974961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:18:14.456 passed 00:18:14.456 Test: blockdev write read size > 128k ...passed 00:18:14.456 Test: blockdev write read invalid size ...passed 00:18:14.456 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.456 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.456 Test: blockdev write read max offset ...passed 00:18:14.456 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.456 Test: blockdev writev readv 8 blocks ...passed 00:18:14.456 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.457 Test: blockdev writev readv block ...passed 00:18:14.457 Test: blockdev writev readv size > 128k ...passed 00:18:14.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.457 Test: blockdev comparev and writev ...[2024-11-20 11:32:19.983687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d3e30000 len:0x1000 00:18:14.457 [2024-11-20 11:32:19.983746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:14.457 passed 00:18:14.457 Test: blockdev nvme passthru rw ...passed 00:18:14.457 Test: blockdev nvme passthru vendor specific ...passed 00:18:14.457 Test: blockdev nvme admin passthru ...passed 00:18:14.457 Test: blockdev copy ...passed 00:18:14.457 Suite: bdevio tests on: Nvme1n1p1 00:18:14.457 Test: blockdev write read block ...passed 00:18:14.457 Test: blockdev write zeroes read block ...passed 00:18:14.457 Test: blockdev write zeroes read no split ...passed 00:18:14.457 Test: blockdev write zeroes read split ...passed 00:18:14.457 Test: blockdev write zeroes read split partial ...passed 00:18:14.457 Test: blockdev reset ...[2024-11-20 11:32:20.068056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:18:14.457 [2024-11-20 11:32:20.071834] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
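
The LBAs in the two partition suites line up with the GPT that sgdisk wrote earlier: Nvme1n1p1 begins at offset_blocks 256 with 655104 blocks, so Nvme1n1p2 starts right behind it at base LBA 256 + 655104 = 655360 — the lba:655360 in the p2 compare above, with the p1 compare below landing at lba:256. The gpt bdev layer simply adds the partition offset before handing I/O down to the base Nvme1n1. Quick check:

    echo $(( 256 + 655104 ))   # 655360, the first base-bdev LBA of Nvme1n1p2
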
00:18:14.457 passed 00:18:14.457 Test: blockdev write read 8 blocks ...passed 00:18:14.457 Test: blockdev write read size > 128k ...passed 00:18:14.457 Test: blockdev write read invalid size ...passed 00:18:14.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.457 Test: blockdev write read max offset ...passed 00:18:14.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.457 Test: blockdev writev readv 8 blocks ...passed 00:18:14.457 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.457 Test: blockdev writev readv block ...passed 00:18:14.457 Test: blockdev writev readv size > 128k ...passed 00:18:14.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.457 Test: blockdev comparev and writev ...[2024-11-20 11:32:20.083047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c1a0e000 len:0x1000 00:18:14.457 [2024-11-20 11:32:20.083106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:14.457 passed 00:18:14.457 Test: blockdev nvme passthru rw ...passed 00:18:14.457 Test: blockdev nvme passthru vendor specific ...passed 00:18:14.457 Test: blockdev nvme admin passthru ...passed 00:18:14.457 Test: blockdev copy ...passed 00:18:14.457 Suite: bdevio tests on: Nvme0n1 00:18:14.457 Test: blockdev write read block ...passed 00:18:14.457 Test: blockdev write zeroes read block ...passed 00:18:14.457 Test: blockdev write zeroes read no split ...passed 00:18:14.457 Test: blockdev write zeroes read split ...passed 00:18:14.457 Test: blockdev write zeroes read split partial ...passed 00:18:14.457 Test: blockdev reset ...[2024-11-20 11:32:20.153141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:18:14.457 [2024-11-20 11:32:20.156738] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:18:14.457 passed 00:18:14.457 Test: blockdev write read 8 blocks ...passed 00:18:14.457 Test: blockdev write read size > 128k ...passed 00:18:14.457 Test: blockdev write read invalid size ...passed 00:18:14.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.457 Test: blockdev write read max offset ...passed 00:18:14.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.457 Test: blockdev writev readv 8 blocks ...passed 00:18:14.457 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.457 Test: blockdev writev readv block ...passed 00:18:14.457 Test: blockdev writev readv size > 128k ...passed 00:18:14.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.457 Test: blockdev comparev and writev ...passed 00:18:14.457 Test: blockdev nvme passthru rw ...[2024-11-20 11:32:20.164849] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:18:14.457 separate metadata which is not supported yet. 
00:18:14.457 passed 00:18:14.457 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:32:20.165367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:18:14.457 [2024-11-20 11:32:20.165546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:18:14.457 passed 00:18:14.457 Test: blockdev nvme admin passthru ...passed 00:18:14.457 Test: blockdev copy ...passed 00:18:14.457 00:18:14.457 Run Summary: Type Total Ran Passed Failed Inactive 00:18:14.457 suites 7 7 n/a 0 0 00:18:14.457 tests 161 161 161 0 0 00:18:14.457 asserts 1025 1025 1025 0 n/a 00:18:14.457 00:18:14.457 Elapsed time = 1.696 seconds 00:18:14.457 0 00:18:14.457 11:32:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62820 00:18:14.457 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62820 ']' 00:18:14.457 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62820 00:18:14.457 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:14.457 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.457 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62820 00:18:14.714 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.714 killing process with pid 62820 00:18:14.714 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.714 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62820' 00:18:14.714 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62820 00:18:14.714 11:32:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62820 00:18:15.649 ************************************ 00:18:15.649 END TEST bdev_bounds 00:18:15.649 ************************************ 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:15.649 00:18:15.649 real 0m2.885s 00:18:15.649 user 0m7.347s 00:18:15.649 sys 0m0.436s 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:15.649 11:32:21 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:15.649 11:32:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:15.649 11:32:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.649 11:32:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:15.649 ************************************ 00:18:15.649 START TEST bdev_nbd 00:18:15.649 ************************************ 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[
Linux == Linux ]] 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62885 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62885 /var/tmp/spdk-nbd.sock 00:18:15.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62885 ']' 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.649 11:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:15.649 [2024-11-20 11:32:21.351240] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
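At this point in the log, nbd_function_test has launched bdev_svc with the bdev.json config on a dedicated RPC socket, and waitforlisten is polling until that socket accepts connections. A minimal sketch of this boot-and-wait step, reusing the binary, socket, and config paths shown in the trace (the inline polling loop below is a simplified stand-in for the autotest waitforlisten helper, not a verbatim copy):

    sock=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # launch the bdev service in the background on its own RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json "$conf" '' &
    nbd_pid=$!
    # poll until the UNIX-domain socket exists, i.e. the app is listening
    for ((i = 0; i < 100; i++)); do
        [[ -S $sock ]] && break
        sleep 0.1
    done
    [[ -S $sock ]] || { kill "$nbd_pid"; exit 1; }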
00:18:15.649 [2024-11-20 11:32:21.351401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.907 [2024-11-20 11:32:21.532372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.907 [2024-11-20 11:32:21.665512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:16.840 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.097 1+0 records in 00:18:17.097 1+0 records out 00:18:17.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008119 s, 5.0 MB/s 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:17.097 11:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.356 1+0 records in 00:18:17.356 1+0 records out 00:18:17.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004774 s, 8.6 MB/s 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:17.356 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.923 1+0 records in 00:18:17.923 1+0 records out 00:18:17.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701322 s, 5.8 MB/s 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:17.923 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.182 1+0 records in 00:18:18.182 1+0 records out 00:18:18.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701529 s, 5.8 MB/s 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.182 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.183 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.183 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.183 11:32:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.183 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.183 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.183 11:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.478 1+0 records in 00:18:18.478 1+0 records out 00:18:18.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053549 s, 7.6 MB/s 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.478 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
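Each nbd_start_disk RPC in the trace is followed by the same readiness check before the next device is exported: poll /proc/partitions for the new nbd name, then prove the device actually serves I/O with a single 4096-byte O_DIRECT read whose size is confirmed with stat. A condensed sketch of that check, simplified from the common/autotest_common.sh trace above (the /tmp/nbdtest scratch path and the 0.1 s retry delay are substitutions, not copied from the helper):

    waitfornbd() {
        local nbd_name=$1 i
        # wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1
        # one direct-I/O read proves the SPDK backend is answering
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
    }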
00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.756 1+0 records in 00:18:18.756 1+0 records out 00:18:18.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000934331 s, 4.4 MB/s 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.756 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.757 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.757 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.757 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.757 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.757 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:18.757 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:18:19.324 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:18:19.324 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:18:19.324 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:18:19.324 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:18:19.324 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:19.324 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.325 1+0 records in 00:18:19.325 1+0 records out 00:18:19.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874074 s, 4.7 MB/s 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:19.325 11:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd0", 00:18:19.584 "bdev_name": "Nvme0n1" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd1", 00:18:19.584 "bdev_name": "Nvme1n1p1" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd2", 00:18:19.584 "bdev_name": "Nvme1n1p2" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd3", 00:18:19.584 "bdev_name": "Nvme2n1" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd4", 00:18:19.584 "bdev_name": "Nvme2n2" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd5", 00:18:19.584 "bdev_name": "Nvme2n3" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd6", 00:18:19.584 "bdev_name": "Nvme3n1" 00:18:19.584 } 00:18:19.584 ]' 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd0", 00:18:19.584 "bdev_name": "Nvme0n1" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd1", 00:18:19.584 "bdev_name": "Nvme1n1p1" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd2", 00:18:19.584 "bdev_name": "Nvme1n1p2" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd3", 00:18:19.584 "bdev_name": "Nvme2n1" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd4", 00:18:19.584 "bdev_name": "Nvme2n2" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd5", 00:18:19.584 "bdev_name": "Nvme2n3" 00:18:19.584 }, 00:18:19.584 { 00:18:19.584 "nbd_device": "/dev/nbd6", 00:18:19.584 "bdev_name": "Nvme3n1" 00:18:19.584 } 00:18:19.584 ]' 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:19.584 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.585 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:19.843 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.843 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.843 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.843 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.843 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.844 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.844 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.844 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.844 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.844 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.102 11:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.360 11:32:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.927 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.928 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.186 11:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:21.754 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:22.013 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:22.014 11:32:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.014 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:18:22.273 /dev/nbd0 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.273 1+0 records in 00:18:22.273 1+0 records out 00:18:22.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688069 s, 6.0 MB/s 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.273 11:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:18:22.533 /dev/nbd1 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.533 11:32:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.533 1+0 records in 00:18:22.533 1+0 records out 00:18:22.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000871754 s, 4.7 MB/s 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.533 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:18:22.793 /dev/nbd10 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.793 1+0 records in 00:18:22.793 1+0 records out 00:18:22.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732679 s, 5.6 MB/s 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:22.793 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:18:23.052 /dev/nbd11 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.052 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.312 1+0 records in 00:18:23.312 1+0 records out 00:18:23.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719311 s, 5.7 MB/s 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:23.312 11:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:18:24.051 /dev/nbd12 00:18:24.051 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:24.051 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:24.051 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:24.051 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:24.051 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.051 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.052 1+0 records in 00:18:24.052 1+0 records out 00:18:24.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848374 s, 4.8 MB/s 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:18:24.052 /dev/nbd13 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.052 1+0 records in 00:18:24.052 1+0 records out 00:18:24.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768811 s, 5.3 MB/s 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:24.052 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:18:24.464 /dev/nbd14 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.464 1+0 records in 00:18:24.464 1+0 records out 00:18:24.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714441 s, 5.7 MB/s 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.464 11:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:24.464 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd0", 00:18:24.464 "bdev_name": "Nvme0n1" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd1", 00:18:24.464 "bdev_name": "Nvme1n1p1" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd10", 00:18:24.464 "bdev_name": "Nvme1n1p2" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd11", 00:18:24.464 "bdev_name": "Nvme2n1" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd12", 00:18:24.464 "bdev_name": "Nvme2n2" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd13", 00:18:24.464 "bdev_name": "Nvme2n3" 
00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd14", 00:18:24.464 "bdev_name": "Nvme3n1" 00:18:24.464 } 00:18:24.464 ]' 00:18:24.464 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd0", 00:18:24.464 "bdev_name": "Nvme0n1" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd1", 00:18:24.464 "bdev_name": "Nvme1n1p1" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd10", 00:18:24.464 "bdev_name": "Nvme1n1p2" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd11", 00:18:24.464 "bdev_name": "Nvme2n1" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd12", 00:18:24.464 "bdev_name": "Nvme2n2" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd13", 00:18:24.464 "bdev_name": "Nvme2n3" 00:18:24.464 }, 00:18:24.464 { 00:18:24.464 "nbd_device": "/dev/nbd14", 00:18:24.464 "bdev_name": "Nvme3n1" 00:18:24.464 } 00:18:24.464 ]' 00:18:24.464 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:24.464 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:24.464 /dev/nbd1 00:18:24.464 /dev/nbd10 00:18:24.464 /dev/nbd11 00:18:24.464 /dev/nbd12 00:18:24.464 /dev/nbd13 00:18:24.464 /dev/nbd14' 00:18:24.464 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:24.464 /dev/nbd1 00:18:24.464 /dev/nbd10 00:18:24.464 /dev/nbd11 00:18:24.464 /dev/nbd12 00:18:24.464 /dev/nbd13 00:18:24.464 /dev/nbd14' 00:18:24.464 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:24.465 256+0 records in 00:18:24.465 256+0 records out 00:18:24.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732539 s, 143 MB/s 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.465 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:24.724 256+0 records in 00:18:24.724 256+0 records out 00:18:24.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.168864 s, 6.2 MB/s 00:18:24.724 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.724 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:24.983 256+0 records in 00:18:24.983 256+0 records out 00:18:24.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189532 s, 5.5 MB/s 00:18:24.983 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.983 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:24.983 256+0 records in 00:18:24.983 256+0 records out 00:18:24.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189499 s, 5.5 MB/s 00:18:24.983 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:24.983 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:25.242 256+0 records in 00:18:25.242 256+0 records out 00:18:25.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179508 s, 5.8 MB/s 00:18:25.242 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:25.242 11:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:25.501 256+0 records in 00:18:25.501 256+0 records out 00:18:25.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174894 s, 6.0 MB/s 00:18:25.501 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:25.501 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:25.501 256+0 records in 00:18:25.501 256+0 records out 00:18:25.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169954 s, 6.2 MB/s 00:18:25.501 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:25.501 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:18:25.761 256+0 records in 00:18:25.761 256+0 records out 00:18:25.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174489 s, 6.0 MB/s 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.761 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.329 11:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:26.586 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:26.586 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:26.586 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:26.586 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.586 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.587 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:26.587 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.587 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.587 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.587 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.844 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.101 11:32:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.359 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.617 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:27.874 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:28.132 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:28.132 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:28.132 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:28.389 11:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:28.658 malloc_lvol_verify 00:18:28.658 11:32:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:28.930 67723b1b-c1ff-4b89-956c-abddf28f3da3 00:18:28.930 11:32:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:29.188 b87d99c3-9eb4-4cc3-95ce-315adc7528e3 00:18:29.188 11:32:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:29.446 /dev/nbd0 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:29.446 mke2fs 1.47.0 (5-Feb-2023) 00:18:29.446 Discarding device blocks: 0/4096 done 00:18:29.446 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:29.446 00:18:29.446 Allocating group tables: 0/1 done 00:18:29.446 Writing inode tables: 0/1 done 00:18:29.446 Creating journal (1024 blocks): done 00:18:29.446 Writing superblocks and filesystem accounting information: 0/1 done 00:18:29.446 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:29.446 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62885 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62885 ']' 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62885 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62885 00:18:30.013 killing process with pid 62885 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62885' 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62885 00:18:30.013 11:32:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62885 00:18:30.949 11:32:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:30.949 00:18:30.949 real 0m15.435s 00:18:30.949 user 0m22.171s 00:18:30.949 sys 0m4.879s 00:18:30.949 11:32:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.949 11:32:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:30.949 ************************************ 00:18:30.949 END TEST bdev_nbd 00:18:30.949 ************************************ 00:18:31.207 11:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:31.207 11:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:18:31.207 11:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:18:31.207 11:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:31.207 skipping fio tests on NVMe due to multi-ns failures. 
00:18:31.207 11:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:31.207 11:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:31.207 11:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:31.207 11:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.207 11:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:31.207 ************************************ 00:18:31.207 START TEST bdev_verify 00:18:31.207 ************************************ 00:18:31.207 11:32:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:31.207 [2024-11-20 11:32:36.851625] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:18:31.207 [2024-11-20 11:32:36.851825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63344 ] 00:18:31.467 [2024-11-20 11:32:37.049005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:31.467 [2024-11-20 11:32:37.212644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.467 [2024-11-20 11:32:37.212653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.404 Running I/O for 5 seconds... 
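[Reference note] The bdevperf invocation driving this test, flag by flag. The annotations are the editor's reading of bdevperf's usage text, not output from the run; the -C description matches each bdev reporting both a Core Mask 0x1 and a Core Mask 0x2 job in the table that follows.

    #   --json bdev.json   bdev layer configuration to load at startup
    #   -q 128             128 outstanding I/Os per job
    #   -o 4096            4 KiB I/O size
    #   -w verify          write a pattern, read it back, and compare
    #   -t 5               run for 5 seconds
    #   -C                 every core submits I/O to every bdev (hence the
    #                      Core Mask 0x1 and 0x2 rows per bdev below)
    #   -m 0x3             reactor core mask: cores 0 and 1
    bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3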
00:18:34.716 19072.00 IOPS, 74.50 MiB/s [2024-11-20T11:32:41.419Z] 18464.00 IOPS, 72.12 MiB/s [2024-11-20T11:32:42.355Z] 18560.00 IOPS, 72.50 MiB/s [2024-11-20T11:32:43.292Z] 18288.00 IOPS, 71.44 MiB/s [2024-11-20T11:32:43.292Z] 18444.80 IOPS, 72.05 MiB/s 00:18:37.526 Latency(us) 00:18:37.526 [2024-11-20T11:32:43.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.526 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x0 length 0xbd0bd 00:18:37.526 Nvme0n1 : 5.06 1316.42 5.14 0.00 0.00 96815.96 22639.71 93418.59 00:18:37.526 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:18:37.526 Nvme0n1 : 5.05 1266.86 4.95 0.00 0.00 100629.37 22043.93 92941.96 00:18:37.526 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x0 length 0x4ff80 00:18:37.526 Nvme1n1p1 : 5.06 1316.00 5.14 0.00 0.00 96604.87 23235.49 84362.71 00:18:37.526 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x4ff80 length 0x4ff80 00:18:37.526 Nvme1n1p1 : 5.05 1266.45 4.95 0.00 0.00 100484.07 24546.21 90558.84 00:18:37.526 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x0 length 0x4ff7f 00:18:37.526 Nvme1n1p2 : 5.08 1321.81 5.16 0.00 0.00 96002.99 8043.05 78166.57 00:18:37.526 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:18:37.526 Nvme1n1p2 : 5.08 1273.12 4.97 0.00 0.00 99821.00 9592.09 86745.83 00:18:37.526 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x0 length 0x80000 00:18:37.526 Nvme2n1 : 5.09 1321.39 5.16 0.00 0.00 95831.14 8817.57 75306.82 00:18:37.526 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x80000 length 0x80000 00:18:37.526 Nvme2n1 : 5.08 1272.70 4.97 0.00 0.00 99671.06 10009.13 83886.08 00:18:37.526 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.526 Verification LBA range: start 0x0 length 0x80000 00:18:37.526 Nvme2n2 : 5.10 1330.88 5.20 0.00 0.00 95156.03 9175.04 78643.20 00:18:37.526 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.527 Verification LBA range: start 0x80000 length 0x80000 00:18:37.527 Nvme2n2 : 5.08 1272.28 4.97 0.00 0.00 99513.22 9830.40 88175.71 00:18:37.527 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.527 Verification LBA range: start 0x0 length 0x80000 00:18:37.527 Nvme2n3 : 5.10 1330.51 5.20 0.00 0.00 94989.69 9115.46 81502.95 00:18:37.527 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.527 Verification LBA range: start 0x80000 length 0x80000 00:18:37.527 Nvme2n3 : 5.09 1281.93 5.01 0.00 0.00 98767.80 9055.88 92465.34 00:18:37.527 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.527 Verification LBA range: start 0x0 length 0x20000 00:18:37.527 Nvme3n1 : 5.10 1330.12 5.20 0.00 0.00 94833.65 9175.04 84839.33 00:18:37.527 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.527 Verification LBA range: start 0x20000 length 0x20000 00:18:37.527 Nvme3n1 : 
5.09 1281.60 5.01 0.00 0.00 98564.02 8936.73 93895.21 00:18:37.527 [2024-11-20T11:32:43.293Z] =================================================================================================================== 00:18:37.527 [2024-11-20T11:32:43.293Z] Total : 18182.08 71.02 0.00 0.00 97647.18 8043.05 93895.21 00:18:38.937 00:18:38.937 real 0m7.748s 00:18:38.937 user 0m14.123s 00:18:38.937 sys 0m0.374s 00:18:38.937 11:32:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.937 ************************************ 00:18:38.937 END TEST bdev_verify 00:18:38.937 ************************************ 00:18:38.937 11:32:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:38.937 11:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:38.937 11:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:38.937 11:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.937 11:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:38.937 ************************************ 00:18:38.937 START TEST bdev_verify_big_io 00:18:38.937 ************************************ 00:18:38.937 11:32:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:38.937 [2024-11-20 11:32:44.661841] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:18:38.937 [2024-11-20 11:32:44.662037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:18:39.197 [2024-11-20 11:32:44.852003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.455 [2024-11-20 11:32:44.991442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.455 [2024-11-20 11:32:44.991454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.389 Running I/O for 5 seconds... 
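[Reference note] The MiB/s column in the verify table above is simply IOPS scaled by the 4 KiB I/O size, which makes the table easy to sanity-check. For example, the first Nvme0n1 row:

    # MiB/s = IOPS * io_size / 2^20; first Nvme0n1 verify row above:
    awk 'BEGIN { printf "%.2f MiB/s\n", 1316.42 * 4096 / 1048576 }'   # prints 5.14 MiB/s, matching the table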
00:18:42.857 0.00 IOPS, 0.00 MiB/s [2024-11-20T11:32:51.905Z] 956.50 IOPS, 59.78 MiB/s [2024-11-20T11:32:52.164Z] 1747.67 IOPS, 109.23 MiB/s [2024-11-20T11:32:52.164Z] 2399.75 IOPS, 149.98 MiB/s 00:18:46.398 Latency(us) 00:18:46.398 [2024-11-20T11:32:52.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.398 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0xbd0b 00:18:46.398 Nvme0n1 : 5.83 106.20 6.64 0.00 0.00 1138972.46 21567.30 1204909.15 00:18:46.398 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0xbd0b length 0xbd0b 00:18:46.398 Nvme0n1 : 5.87 109.88 6.87 0.00 0.00 1118900.98 34078.72 1204909.15 00:18:46.398 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0x4ff8 00:18:46.398 Nvme1n1p1 : 5.72 111.91 6.99 0.00 0.00 1061642.24 91988.71 1029510.98 00:18:46.398 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x4ff8 length 0x4ff8 00:18:46.398 Nvme1n1p1 : 5.80 110.31 6.89 0.00 0.00 1094208.79 104857.60 1037136.99 00:18:46.398 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0x4ff7 00:18:46.398 Nvme1n1p2 : 5.88 119.70 7.48 0.00 0.00 974350.39 46709.29 880803.84 00:18:46.398 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x4ff7 length 0x4ff7 00:18:46.398 Nvme1n1p2 : 5.87 113.71 7.11 0.00 0.00 1033231.23 69110.69 1105771.05 00:18:46.398 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0x8000 00:18:46.398 Nvme2n1 : 5.88 119.64 7.48 0.00 0.00 944479.71 47900.86 896055.85 00:18:46.398 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x8000 length 0x8000 00:18:46.398 Nvme2n1 : 5.88 113.83 7.11 0.00 0.00 997234.39 69587.32 1113397.06 00:18:46.398 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0x8000 00:18:46.398 Nvme2n2 : 6.00 122.80 7.68 0.00 0.00 887746.92 67204.19 1075267.03 00:18:46.398 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x8000 length 0x8000 00:18:46.398 Nvme2n2 : 5.95 117.94 7.37 0.00 0.00 933551.41 70540.57 1128649.08 00:18:46.398 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0x8000 00:18:46.398 Nvme2n3 : 6.03 130.92 8.18 0.00 0.00 815071.01 29312.47 934185.89 00:18:46.398 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x8000 length 0x8000 00:18:46.398 Nvme2n3 : 6.03 127.38 7.96 0.00 0.00 843000.48 29908.25 1143901.09 00:18:46.398 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x0 length 0x2000 00:18:46.398 Nvme3n1 : 6.10 143.71 8.98 0.00 0.00 725038.41 1117.09 1853119.77 00:18:46.398 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.398 Verification LBA range: start 0x2000 length 0x2000 00:18:46.398 Nvme3n1 : 6.07 133.74 8.36 0.00 0.00 
782671.76 2219.29 1860745.77 00:18:46.398 [2024-11-20T11:32:52.164Z] =================================================================================================================== 00:18:46.398 [2024-11-20T11:32:52.164Z] Total : 1681.67 105.10 0.00 0.00 941053.19 1117.09 1860745.77 00:18:48.342 00:18:48.342 real 0m9.214s 00:18:48.342 user 0m17.065s 00:18:48.342 sys 0m0.422s 00:18:48.342 11:32:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.342 11:32:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.342 ************************************ 00:18:48.342 END TEST bdev_verify_big_io 00:18:48.342 ************************************ 00:18:48.342 11:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.342 11:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:48.342 11:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.342 11:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:48.342 ************************************ 00:18:48.342 START TEST bdev_write_zeroes 00:18:48.342 ************************************ 00:18:48.342 11:32:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.342 [2024-11-20 11:32:53.915557] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:18:48.342 [2024-11-20 11:32:53.915763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63568 ] 00:18:48.342 [2024-11-20 11:32:54.093297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.600 [2024-11-20 11:32:54.226285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.166 Running I/O for 1 seconds... 
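[Reference note] The same sanity check applies to the big-I/O table above, where -o 65536 makes the scale factor 65536/2^20 = 1/16. Against its Total row:

    # 64 KiB I/O: MiB/s = IOPS / 16; Total row of the big-I/O table:
    awk 'BEGIN { printf "%.2f MiB/s\n", 1681.67 * 65536 / 1048576 }'   # prints 105.10 MiB/s, matching the table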
00:18:50.542 51008.00 IOPS, 199.25 MiB/s 00:18:50.542 Latency(us) 00:18:50.542 [2024-11-20T11:32:56.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.542 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme0n1 : 1.03 7265.81 28.38 0.00 0.00 17565.72 14179.61 34793.66 00:18:50.542 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme1n1p1 : 1.03 7253.71 28.33 0.00 0.00 17562.84 14358.34 34078.72 00:18:50.542 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme1n1p2 : 1.03 7241.79 28.29 0.00 0.00 17527.97 14358.34 33125.47 00:18:50.542 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme2n1 : 1.04 7230.88 28.25 0.00 0.00 17458.75 12928.47 32172.22 00:18:50.542 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme2n2 : 1.04 7219.70 28.20 0.00 0.00 17428.92 11319.85 31457.28 00:18:50.542 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme2n3 : 1.04 7209.02 28.16 0.00 0.00 17402.98 10366.60 32648.84 00:18:50.542 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.542 Nvme3n1 : 1.04 7136.73 27.88 0.00 0.00 17534.17 14120.03 35031.97 00:18:50.542 [2024-11-20T11:32:56.308Z] =================================================================================================================== 00:18:50.542 [2024-11-20T11:32:56.308Z] Total : 50557.65 197.49 0.00 0.00 17497.29 10366.60 35031.97 00:18:51.479 00:18:51.479 real 0m3.340s 00:18:51.479 user 0m2.912s 00:18:51.479 sys 0m0.304s 00:18:51.479 11:32:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.479 11:32:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:51.479 ************************************ 00:18:51.479 END TEST bdev_write_zeroes 00:18:51.479 ************************************ 00:18:51.479 11:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.479 11:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:51.479 11:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.479 11:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:51.479 ************************************ 00:18:51.479 START TEST bdev_json_nonenclosed 00:18:51.479 ************************************ 00:18:51.479 11:32:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.741 [2024-11-20 11:32:57.296229] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:18:51.741 [2024-11-20 11:32:57.296404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63621 ] 00:18:51.741 [2024-11-20 11:32:57.472788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.000 [2024-11-20 11:32:57.602911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.000 [2024-11-20 11:32:57.603094] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:52.000 [2024-11-20 11:32:57.603123] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:52.000 [2024-11-20 11:32:57.603137] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:52.258 00:18:52.258 real 0m0.661s 00:18:52.258 user 0m0.432s 00:18:52.258 sys 0m0.123s 00:18:52.258 11:32:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.259 11:32:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:52.259 ************************************ 00:18:52.259 END TEST bdev_json_nonenclosed 00:18:52.259 ************************************ 00:18:52.259 11:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:52.259 11:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:52.259 11:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.259 11:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:52.259 ************************************ 00:18:52.259 START TEST bdev_json_nonarray 00:18:52.259 ************************************ 00:18:52.259 11:32:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:52.518 [2024-11-20 11:32:58.025136] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:18:52.518 [2024-11-20 11:32:58.025334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63652 ] 00:18:52.518 [2024-11-20 11:32:58.212203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.777 [2024-11-20 11:32:58.343782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.777 [2024-11-20 11:32:58.343903] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:52.777 [2024-11-20 11:32:58.343933] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:52.777 [2024-11-20 11:32:58.343948] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:53.036 00:18:53.036 real 0m0.695s 00:18:53.036 user 0m0.451s 00:18:53.036 sys 0m0.138s 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 ************************************ 00:18:53.036 END TEST bdev_json_nonarray 00:18:53.036 ************************************ 00:18:53.036 11:32:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:18:53.036 11:32:58 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:18:53.036 11:32:58 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:18:53.036 11:32:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:53.036 11:32:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.036 11:32:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 ************************************ 00:18:53.036 START TEST bdev_gpt_uuid 00:18:53.036 ************************************ 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63677 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63677 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63677 ']' 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.036 11:32:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:53.294 [2024-11-20 11:32:58.801076] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:18:53.294 [2024-11-20 11:32:58.801263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63677 ] 00:18:53.294 [2024-11-20 11:32:58.989372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.552 [2024-11-20 11:32:59.123106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.486 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.486 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:18:54.486 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:54.486 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.486 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:54.744 Some configs were skipped because the RPC state that can call them passed over. 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.744 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:18:54.744 { 00:18:54.744 "name": "Nvme1n1p1", 00:18:54.744 "aliases": [ 00:18:54.744 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:18:54.745 ], 00:18:54.745 "product_name": "GPT Disk", 00:18:54.745 "block_size": 4096, 00:18:54.745 "num_blocks": 655104, 00:18:54.745 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:54.745 "assigned_rate_limits": { 00:18:54.745 "rw_ios_per_sec": 0, 00:18:54.745 "rw_mbytes_per_sec": 0, 00:18:54.745 "r_mbytes_per_sec": 0, 00:18:54.745 "w_mbytes_per_sec": 0 00:18:54.745 }, 00:18:54.745 "claimed": false, 00:18:54.745 "zoned": false, 00:18:54.745 "supported_io_types": { 00:18:54.745 "read": true, 00:18:54.745 "write": true, 00:18:54.745 "unmap": true, 00:18:54.745 "flush": true, 00:18:54.745 "reset": true, 00:18:54.745 "nvme_admin": false, 00:18:54.745 "nvme_io": false, 00:18:54.745 "nvme_io_md": false, 00:18:54.745 "write_zeroes": true, 00:18:54.745 "zcopy": false, 00:18:54.745 "get_zone_info": false, 00:18:54.745 "zone_management": false, 00:18:54.745 "zone_append": false, 00:18:54.745 "compare": true, 00:18:54.745 "compare_and_write": false, 00:18:54.745 "abort": true, 00:18:54.745 "seek_hole": false, 00:18:54.745 "seek_data": false, 00:18:54.745 "copy": true, 00:18:54.745 "nvme_iov_md": false 00:18:54.745 }, 00:18:54.745 "driver_specific": { 
00:18:54.745 "gpt": { 00:18:54.745 "base_bdev": "Nvme1n1", 00:18:54.745 "offset_blocks": 256, 00:18:54.745 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:18:54.745 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:54.745 "partition_name": "SPDK_TEST_first" 00:18:54.745 } 00:18:54.745 } 00:18:54.745 } 00:18:54.745 ]' 00:18:54.745 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:18:54.745 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:18:54.745 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:18:54.745 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:54.745 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:55.003 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:18:55.004 { 00:18:55.004 "name": "Nvme1n1p2", 00:18:55.004 "aliases": [ 00:18:55.004 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:18:55.004 ], 00:18:55.004 "product_name": "GPT Disk", 00:18:55.004 "block_size": 4096, 00:18:55.004 "num_blocks": 655103, 00:18:55.004 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:55.004 "assigned_rate_limits": { 00:18:55.004 "rw_ios_per_sec": 0, 00:18:55.004 "rw_mbytes_per_sec": 0, 00:18:55.004 "r_mbytes_per_sec": 0, 00:18:55.004 "w_mbytes_per_sec": 0 00:18:55.004 }, 00:18:55.004 "claimed": false, 00:18:55.004 "zoned": false, 00:18:55.004 "supported_io_types": { 00:18:55.004 "read": true, 00:18:55.004 "write": true, 00:18:55.004 "unmap": true, 00:18:55.004 "flush": true, 00:18:55.004 "reset": true, 00:18:55.004 "nvme_admin": false, 00:18:55.004 "nvme_io": false, 00:18:55.004 "nvme_io_md": false, 00:18:55.004 "write_zeroes": true, 00:18:55.004 "zcopy": false, 00:18:55.004 "get_zone_info": false, 00:18:55.004 "zone_management": false, 00:18:55.004 "zone_append": false, 00:18:55.004 "compare": true, 00:18:55.004 "compare_and_write": false, 00:18:55.004 "abort": true, 00:18:55.004 "seek_hole": false, 00:18:55.004 "seek_data": false, 00:18:55.004 "copy": true, 00:18:55.004 "nvme_iov_md": false 00:18:55.004 }, 00:18:55.004 "driver_specific": { 00:18:55.004 "gpt": { 00:18:55.004 "base_bdev": "Nvme1n1", 00:18:55.004 "offset_blocks": 655360, 00:18:55.004 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:18:55.004 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:55.004 "partition_name": "SPDK_TEST_second" 00:18:55.004 } 00:18:55.004 } 00:18:55.004 } 00:18:55.004 ]' 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63677 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63677 ']' 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63677 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63677 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.004 killing process with pid 63677 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63677' 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63677 00:18:55.004 11:33:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63677 00:18:57.538 00:18:57.538 real 0m4.343s 00:18:57.538 user 0m4.569s 00:18:57.538 sys 0m0.579s 00:18:57.538 11:33:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.538 ************************************ 00:18:57.538 END TEST bdev_gpt_uuid 00:18:57.538 ************************************ 00:18:57.538 11:33:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:18:57.538 11:33:03 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:57.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.056 Waiting for block devices as requested 00:18:58.056 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.056 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:18:58.315 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.315 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:03.599 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:03.599 11:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:19:03.599 11:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:19:03.599 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:19:03.599 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:19:03.599 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:03.599 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:03.599 11:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:19:03.599 00:19:03.599 real 1m6.759s 00:19:03.599 user 1m25.709s 00:19:03.599 sys 0m10.829s 00:19:03.599 ************************************ 00:19:03.599 END TEST blockdev_nvme_gpt 00:19:03.599 ************************************ 00:19:03.599 11:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.599 11:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:03.599 11:33:09 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:03.599 11:33:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.599 11:33:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.599 11:33:09 -- common/autotest_common.sh@10 -- # set +x 00:19:03.599 ************************************ 00:19:03.599 START TEST nvme 00:19:03.599 ************************************ 00:19:03.599 11:33:09 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:03.857 * Looking for test storage... 00:19:03.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.857 11:33:09 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.857 11:33:09 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.857 11:33:09 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.857 11:33:09 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.857 11:33:09 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.857 11:33:09 nvme -- scripts/common.sh@344 -- # case "$op" in 00:19:03.857 11:33:09 nvme -- scripts/common.sh@345 -- # : 1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.857 11:33:09 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.857 11:33:09 nvme -- scripts/common.sh@365 -- # decimal 1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@353 -- # local d=1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.857 11:33:09 nvme -- scripts/common.sh@355 -- # echo 1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.857 11:33:09 nvme -- scripts/common.sh@366 -- # decimal 2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@353 -- # local d=2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.857 11:33:09 nvme -- scripts/common.sh@355 -- # echo 2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.857 11:33:09 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.857 11:33:09 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.857 11:33:09 nvme -- scripts/common.sh@368 -- # return 0 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:03.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.857 --rc genhtml_branch_coverage=1 00:19:03.857 --rc genhtml_function_coverage=1 00:19:03.857 --rc genhtml_legend=1 00:19:03.857 --rc geninfo_all_blocks=1 00:19:03.857 --rc geninfo_unexecuted_blocks=1 00:19:03.857 00:19:03.857 ' 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:03.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.857 --rc genhtml_branch_coverage=1 00:19:03.857 --rc genhtml_function_coverage=1 00:19:03.857 --rc genhtml_legend=1 00:19:03.857 --rc geninfo_all_blocks=1 00:19:03.857 --rc geninfo_unexecuted_blocks=1 00:19:03.857 00:19:03.857 ' 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:03.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.857 --rc genhtml_branch_coverage=1 00:19:03.857 --rc genhtml_function_coverage=1 00:19:03.857 --rc genhtml_legend=1 00:19:03.857 --rc geninfo_all_blocks=1 00:19:03.857 --rc geninfo_unexecuted_blocks=1 00:19:03.857 00:19:03.857 ' 00:19:03.857 11:33:09 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:03.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.857 --rc genhtml_branch_coverage=1 00:19:03.857 --rc genhtml_function_coverage=1 00:19:03.857 --rc genhtml_legend=1 00:19:03.857 --rc geninfo_all_blocks=1 00:19:03.857 --rc geninfo_unexecuted_blocks=1 00:19:03.857 00:19:03.857 ' 00:19:03.857 11:33:09 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:04.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:04.990 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:04.990 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:04.990 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:04.990 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:05.248 11:33:10 nvme -- nvme/nvme.sh@79 -- # uname 00:19:05.248 Waiting for stub to ready for secondary processes... 
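The 'lt 1.15 2' trace above is scripts/common.sh deciding whether the installed lcov predates the 2.x releases before picking coverage flags: the comparison walks the dot-separated version fields left to right and the first unequal field decides. A minimal standalone sketch of that field-by-field idea, assuming plain dot-separated versions (version_lt is an illustrative name, not the actual common.sh helper):

    version_lt() {   # exit 0 when $1 < $2, comparing dot-separated numeric fields
        local IFS=.
        local -a a=($1) b=($2)   # IFS=. splits "1.15" into (1 15)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # first smaller field wins
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"

Here 1 < 2 already at the first field, which matches the trace: the run concludes lcov 1.15 is the older branch and exports the legacy --rc lcov_branch_coverage/lcov_function_coverage options.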
00:19:05.248 11:33:10 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:19:05.248 11:33:10 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:19:05.248 11:33:10 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1075 -- # stubpid=64332 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64332 ]] 00:19:05.248 11:33:10 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:19:05.248 [2024-11-20 11:33:10.887378] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:05.248 [2024-11-20 11:33:10.887596] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:19:06.183 11:33:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:06.183 11:33:11 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64332 ]] 00:19:06.183 11:33:11 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:19:06.750 [2024-11-20 11:33:12.214936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.750 [2024-11-20 11:33:12.361223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.750 [2024-11-20 11:33:12.361376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.750 [2024-11-20 11:33:12.361559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.750 [2024-11-20 11:33:12.380598] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:19:06.750 [2024-11-20 11:33:12.380652] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:06.750 [2024-11-20 11:33:12.391003] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:19:06.750 [2024-11-20 11:33:12.391128] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:19:06.750 [2024-11-20 11:33:12.399358] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:06.750 [2024-11-20 11:33:12.401033] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:19:06.750 [2024-11-20 11:33:12.401210] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:19:06.750 [2024-11-20 11:33:12.411566] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:06.750 [2024-11-20 11:33:12.411794] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:19:06.750 [2024-11-20 11:33:12.412297] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 
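The trace above also shows how autotest_common.sh gates the NVMe tests on the stub app: _start_stub launches test/app/stub in the background with the arguments from the trace, then loops once per second until the stub publishes its ready file /var/run/spdk_stub0, checking /proc/<pid> on each pass so a crashed stub aborts the wait instead of hanging the job. A hedged sketch of the same pattern (the launch line mirrors the trace; the error message is illustrative):

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        # fail fast if the stub died before creating its ready file
        [[ -e /proc/$stubpid ]] || { echo "stub $stubpid exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.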
00:19:06.750 [2024-11-20 11:33:12.418668] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:06.750 [2024-11-20 11:33:12.419045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:19:06.750 [2024-11-20 11:33:12.419150] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:19:06.750 [2024-11-20 11:33:12.419221] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:19:06.750 [2024-11-20 11:33:12.419649] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:19:07.369 done. 00:19:07.369 11:33:12 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:07.369 11:33:12 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:19:07.369 11:33:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:07.369 11:33:12 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:19:07.369 11:33:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.369 11:33:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.369 ************************************ 00:19:07.369 START TEST nvme_reset 00:19:07.369 ************************************ 00:19:07.369 11:33:12 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:07.627 Initializing NVMe Controllers 00:19:07.627 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:07.627 Skipping QEMU NVMe SSD at 0000:00:11.0 00:19:07.627 Skipping QEMU NVMe SSD at 0000:00:13.0 00:19:07.627 Skipping QEMU NVMe SSD at 0000:00:12.0 00:19:07.627 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:07.627 00:19:07.627 real 0m0.315s 00:19:07.627 user 0m0.106s 00:19:07.627 sys 0m0.161s 00:19:07.627 11:33:13 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.627 11:33:13 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:07.627 ************************************ 00:19:07.627 END TEST nvme_reset 00:19:07.627 ************************************ 00:19:07.627 11:33:13 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:07.627 11:33:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.627 11:33:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.627 11:33:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.627 ************************************ 00:19:07.627 START TEST nvme_identify 00:19:07.627 ************************************ 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:19:07.627 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:07.627 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:07.627 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:07.627 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:07.627 11:33:13 nvme.nvme_identify -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:19:07.627 11:33:13 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:07.627 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:07.890 [2024-11-20 11:33:13.582475] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64365 terminated unexpected 00:19:07.890 ===================================================== 00:19:07.890 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:07.890 ===================================================== 00:19:07.890 Controller Capabilities/Features 00:19:07.890 ================================ 00:19:07.890 Vendor ID: 1b36 00:19:07.890 Subsystem Vendor ID: 1af4 00:19:07.890 Serial Number: 12340 00:19:07.890 Model Number: QEMU NVMe Ctrl 00:19:07.890 Firmware Version: 8.0.0 00:19:07.890 Recommended Arb Burst: 6 00:19:07.890 IEEE OUI Identifier: 00 54 52 00:19:07.890 Multi-path I/O 00:19:07.890 May have multiple subsystem ports: No 00:19:07.890 May have multiple controllers: No 00:19:07.890 Associated with SR-IOV VF: No 00:19:07.890 Max Data Transfer Size: 524288 00:19:07.890 Max Number of Namespaces: 256 00:19:07.890 Max Number of I/O Queues: 64 00:19:07.890 NVMe Specification Version (VS): 1.4 00:19:07.890 NVMe Specification Version (Identify): 1.4 00:19:07.890 Maximum Queue Entries: 2048 00:19:07.890 Contiguous Queues Required: Yes 00:19:07.890 Arbitration Mechanisms Supported 00:19:07.890 Weighted Round Robin: Not Supported 00:19:07.890 Vendor Specific: Not Supported 00:19:07.890 Reset Timeout: 7500 ms 00:19:07.890 Doorbell Stride: 4 bytes 00:19:07.890 NVM Subsystem Reset: Not Supported 00:19:07.890 Command Sets Supported 00:19:07.890 NVM Command Set: Supported 00:19:07.890 Boot Partition: Not Supported 00:19:07.890 Memory Page Size Minimum: 4096 bytes 00:19:07.890 Memory Page Size Maximum: 65536 bytes 00:19:07.890 Persistent Memory Region: Not Supported 00:19:07.890 Optional Asynchronous Events Supported 00:19:07.890 Namespace Attribute Notices: Supported 00:19:07.890 Firmware Activation Notices: Not Supported 00:19:07.890 ANA Change Notices: Not Supported 00:19:07.890 PLE Aggregate Log Change Notices: Not Supported 00:19:07.890 LBA Status Info Alert Notices: Not Supported 00:19:07.890 EGE Aggregate Log Change Notices: Not Supported 00:19:07.890 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.890 Zone Descriptor Change Notices: Not Supported 00:19:07.890 Discovery Log Change Notices: Not Supported 00:19:07.890 Controller Attributes 00:19:07.890 128-bit Host Identifier: Not Supported 00:19:07.890 Non-Operational Permissive Mode: Not Supported 00:19:07.890 NVM Sets: Not Supported 00:19:07.890 Read Recovery Levels: Not Supported 00:19:07.890 Endurance Groups: Not Supported 00:19:07.890 Predictable Latency Mode: Not Supported 00:19:07.890 Traffic Based Keep ALive: Not Supported 00:19:07.890 Namespace Granularity: Not Supported 00:19:07.890 SQ Associations: Not Supported 00:19:07.890 UUID List: Not Supported 00:19:07.890 Multi-Domain Subsystem: Not Supported 00:19:07.890 Fixed Capacity Management: Not Supported 00:19:07.890 Variable Capacity Management: Not Supported 00:19:07.890 Delete 
Endurance Group: Not Supported 00:19:07.890 Delete NVM Set: Not Supported 00:19:07.890 Extended LBA Formats Supported: Supported 00:19:07.890 Flexible Data Placement Supported: Not Supported 00:19:07.890 00:19:07.890 Controller Memory Buffer Support 00:19:07.890 ================================ 00:19:07.890 Supported: No 00:19:07.890 00:19:07.890 Persistent Memory Region Support 00:19:07.890 ================================ 00:19:07.890 Supported: No 00:19:07.890 00:19:07.890 Admin Command Set Attributes 00:19:07.890 ============================ 00:19:07.890 Security Send/Receive: Not Supported 00:19:07.890 Format NVM: Supported 00:19:07.890 Firmware Activate/Download: Not Supported 00:19:07.890 Namespace Management: Supported 00:19:07.890 Device Self-Test: Not Supported 00:19:07.890 Directives: Supported 00:19:07.890 NVMe-MI: Not Supported 00:19:07.890 Virtualization Management: Not Supported 00:19:07.890 Doorbell Buffer Config: Supported 00:19:07.890 Get LBA Status Capability: Not Supported 00:19:07.890 Command & Feature Lockdown Capability: Not Supported 00:19:07.890 Abort Command Limit: 4 00:19:07.890 Async Event Request Limit: 4 00:19:07.890 Number of Firmware Slots: N/A 00:19:07.890 Firmware Slot 1 Read-Only: N/A 00:19:07.890 Firmware Activation Without Reset: N/A 00:19:07.890 Multiple Update Detection Support: N/A 00:19:07.890 Firmware Update Granularity: No Information Provided 00:19:07.890 Per-Namespace SMART Log: Yes 00:19:07.890 Asymmetric Namespace Access Log Page: Not Supported 00:19:07.890 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:07.890 Command Effects Log Page: Supported 00:19:07.890 Get Log Page Extended Data: Supported 00:19:07.890 Telemetry Log Pages: Not Supported 00:19:07.890 Persistent Event Log Pages: Not Supported 00:19:07.891 Supported Log Pages Log Page: May Support 00:19:07.891 Commands Supported & Effects Log Page: Not Supported 00:19:07.891 Feature Identifiers & Effects Log Page:May Support 00:19:07.891 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.891 Data Area 4 for Telemetry Log: Not Supported 00:19:07.891 Error Log Page Entries Supported: 1 00:19:07.891 Keep Alive: Not Supported 00:19:07.891 00:19:07.891 NVM Command Set Attributes 00:19:07.891 ========================== 00:19:07.891 Submission Queue Entry Size 00:19:07.891 Max: 64 00:19:07.891 Min: 64 00:19:07.891 Completion Queue Entry Size 00:19:07.891 Max: 16 00:19:07.891 Min: 16 00:19:07.891 Number of Namespaces: 256 00:19:07.891 Compare Command: Supported 00:19:07.891 Write Uncorrectable Command: Not Supported 00:19:07.891 Dataset Management Command: Supported 00:19:07.891 Write Zeroes Command: Supported 00:19:07.891 Set Features Save Field: Supported 00:19:07.891 Reservations: Not Supported 00:19:07.891 Timestamp: Supported 00:19:07.891 Copy: Supported 00:19:07.891 Volatile Write Cache: Present 00:19:07.891 Atomic Write Unit (Normal): 1 00:19:07.891 Atomic Write Unit (PFail): 1 00:19:07.891 Atomic Compare & Write Unit: 1 00:19:07.891 Fused Compare & Write: Not Supported 00:19:07.891 Scatter-Gather List 00:19:07.891 SGL Command Set: Supported 00:19:07.891 SGL Keyed: Not Supported 00:19:07.891 SGL Bit Bucket Descriptor: Not Supported 00:19:07.891 SGL Metadata Pointer: Not Supported 00:19:07.891 Oversized SGL: Not Supported 00:19:07.891 SGL Metadata Address: Not Supported 00:19:07.891 SGL Offset: Not Supported 00:19:07.891 Transport SGL Data Block: Not Supported 00:19:07.891 Replay Protected Memory Block: Not Supported 00:19:07.891 00:19:07.891 Firmware Slot Information 00:19:07.891 
========================= 00:19:07.891 Active slot: 1 00:19:07.891 Slot 1 Firmware Revision: 1.0 00:19:07.891 00:19:07.891 00:19:07.891 Commands Supported and Effects 00:19:07.891 ============================== 00:19:07.891 Admin Commands 00:19:07.891 -------------- 00:19:07.891 Delete I/O Submission Queue (00h): Supported 00:19:07.891 Create I/O Submission Queue (01h): Supported 00:19:07.891 Get Log Page (02h): Supported 00:19:07.891 Delete I/O Completion Queue (04h): Supported 00:19:07.891 Create I/O Completion Queue (05h): Supported 00:19:07.891 Identify (06h): Supported 00:19:07.891 Abort (08h): Supported 00:19:07.891 Set Features (09h): Supported 00:19:07.891 Get Features (0Ah): Supported 00:19:07.891 Asynchronous Event Request (0Ch): Supported 00:19:07.891 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:07.891 Directive Send (19h): Supported 00:19:07.891 Directive Receive (1Ah): Supported 00:19:07.891 Virtualization Management (1Ch): Supported 00:19:07.891 Doorbell Buffer Config (7Ch): Supported 00:19:07.891 Format NVM (80h): Supported LBA-Change 00:19:07.891 I/O Commands 00:19:07.891 ------------ 00:19:07.891 Flush (00h): Supported LBA-Change 00:19:07.891 Write (01h): Supported LBA-Change 00:19:07.891 Read (02h): Supported 00:19:07.891 Compare (05h): Supported 00:19:07.891 Write Zeroes (08h): Supported LBA-Change 00:19:07.891 Dataset Management (09h): Supported LBA-Change 00:19:07.891 Unknown (0Ch): Supported 00:19:07.891 Unknown (12h): Supported 00:19:07.891 Copy (19h): Supported LBA-Change 00:19:07.891 Unknown (1Dh): Supported LBA-Change 00:19:07.891 00:19:07.891 Error Log 00:19:07.891 ========= 00:19:07.891 00:19:07.891 Arbitration 00:19:07.891 =========== 00:19:07.891 Arbitration Burst: no limit 00:19:07.891 00:19:07.891 Power Management 00:19:07.891 ================ 00:19:07.891 Number of Power States: 1 00:19:07.891 Current Power State: Power State #0 00:19:07.891 Power State #0: 00:19:07.891 Max Power: 25.00 W 00:19:07.891 Non-Operational State: Operational 00:19:07.891 Entry Latency: 16 microseconds 00:19:07.891 Exit Latency: 4 microseconds 00:19:07.891 Relative Read Throughput: 0 00:19:07.891 Relative Read Latency: 0 00:19:07.891 Relative Write Throughput: 0 00:19:07.891 Relative Write Latency: 0 00:19:07.891 [2024-11-20 11:33:13.583815] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64365 terminated unexpected 00:19:07.891 Idle Power: Not Reported 00:19:07.891 Active Power: Not Reported 00:19:07.891 Non-Operational Permissive Mode: Not Supported 00:19:07.891 00:19:07.891 Health Information 00:19:07.891 ================== 00:19:07.891 Critical Warnings: 00:19:07.891 Available Spare Space: OK 00:19:07.891 Temperature: OK 00:19:07.891 Device Reliability: OK 00:19:07.891 Read Only: No 00:19:07.891 Volatile Memory Backup: OK 00:19:07.891 Current Temperature: 323 Kelvin (50 Celsius) 00:19:07.891 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:07.891 Available Spare: 0% 00:19:07.891 Available Spare Threshold: 0% 00:19:07.891 Life Percentage Used: 0% 00:19:07.891 Data Units Read: 649 00:19:07.891 Data Units Written: 577 00:19:07.891 Host Read Commands: 32570 00:19:07.891 Host Write Commands: 32356 00:19:07.891 Controller Busy Time: 0 minutes 00:19:07.891 Power Cycles: 0 00:19:07.891 Power On Hours: 0 hours 00:19:07.891 Unsafe Shutdowns: 0 00:19:07.891 Unrecoverable Media Errors: 0 00:19:07.891 Lifetime Error Log Entries: 0 00:19:07.891 Warning Temperature Time: 0 minutes 00:19:07.891 Critical Temperature Time: 0 minutes
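Before any of these identify dumps were printed, nvme_identify gathered its target list: get_nvme_bdfs renders the local NVMe topology as JSON with scripts/gen_nvme.sh and pulls out every traddr with jq, bailing out if none are found, and spdk_nvme_identify then walks each discovered controller in turn. Condensed from the trace, with paths as in this workspace (the error message is illustrative):

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # the four QEMU controllers, one PCI address per line
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0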
00:19:07.891 00:19:07.891 Number of Queues 00:19:07.891 ================ 00:19:07.891 Number of I/O Submission Queues: 64 00:19:07.891 Number of I/O Completion Queues: 64 00:19:07.891 00:19:07.891 ZNS Specific Controller Data 00:19:07.891 ============================ 00:19:07.891 Zone Append Size Limit: 0 00:19:07.891 00:19:07.891 00:19:07.891 Active Namespaces 00:19:07.891 ================= 00:19:07.891 Namespace ID:1 00:19:07.891 Error Recovery Timeout: Unlimited 00:19:07.891 Command Set Identifier: NVM (00h) 00:19:07.891 Deallocate: Supported 00:19:07.891 Deallocated/Unwritten Error: Supported 00:19:07.891 Deallocated Read Value: All 0x00 00:19:07.891 Deallocate in Write Zeroes: Not Supported 00:19:07.891 Deallocated Guard Field: 0xFFFF 00:19:07.891 Flush: Supported 00:19:07.891 Reservation: Not Supported 00:19:07.891 Metadata Transferred as: Separate Metadata Buffer 00:19:07.891 Namespace Sharing Capabilities: Private 00:19:07.891 Size (in LBAs): 1548666 (5GiB) 00:19:07.891 Capacity (in LBAs): 1548666 (5GiB) 00:19:07.891 Utilization (in LBAs): 1548666 (5GiB) 00:19:07.891 Thin Provisioning: Not Supported 00:19:07.891 Per-NS Atomic Units: No 00:19:07.891 Maximum Single Source Range Length: 128 00:19:07.891 Maximum Copy Length: 128 00:19:07.891 Maximum Source Range Count: 128 00:19:07.891 NGUID/EUI64 Never Reused: No 00:19:07.891 Namespace Write Protected: No 00:19:07.891 Number of LBA Formats: 8 00:19:07.891 Current LBA Format: LBA Format #07 00:19:07.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:07.891 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:07.891 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:07.891 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:07.891 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:07.891 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:07.891 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:07.891 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:07.891 00:19:07.891 NVM Specific Namespace Data 00:19:07.891 =========================== 00:19:07.891 Logical Block Storage Tag Mask: 0 00:19:07.891 Protection Information Capabilities: 00:19:07.891 16b Guard Protection Information Storage Tag Support: No 00:19:07.891 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:07.891 Storage Tag Check Read Support: No 00:19:07.891 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.892 ===================================================== 00:19:07.892 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:07.892 ===================================================== 00:19:07.892 Controller Capabilities/Features 00:19:07.892 ================================ 00:19:07.892 Vendor ID: 1b36 00:19:07.892 Subsystem Vendor ID: 1af4 
00:19:07.892 Serial Number: 12341 00:19:07.892 Model Number: QEMU NVMe Ctrl 00:19:07.892 Firmware Version: 8.0.0 00:19:07.892 Recommended Arb Burst: 6 00:19:07.892 IEEE OUI Identifier: 00 54 52 00:19:07.892 Multi-path I/O 00:19:07.892 May have multiple subsystem ports: No 00:19:07.892 May have multiple controllers: No 00:19:07.892 Associated with SR-IOV VF: No 00:19:07.892 Max Data Transfer Size: 524288 00:19:07.892 Max Number of Namespaces: 256 00:19:07.892 Max Number of I/O Queues: 64 00:19:07.892 NVMe Specification Version (VS): 1.4 00:19:07.892 NVMe Specification Version (Identify): 1.4 00:19:07.892 Maximum Queue Entries: 2048 00:19:07.892 Contiguous Queues Required: Yes 00:19:07.892 Arbitration Mechanisms Supported 00:19:07.892 Weighted Round Robin: Not Supported 00:19:07.892 Vendor Specific: Not Supported 00:19:07.892 Reset Timeout: 7500 ms 00:19:07.892 Doorbell Stride: 4 bytes 00:19:07.892 NVM Subsystem Reset: Not Supported 00:19:07.892 Command Sets Supported 00:19:07.892 NVM Command Set: Supported 00:19:07.892 Boot Partition: Not Supported 00:19:07.892 Memory Page Size Minimum: 4096 bytes 00:19:07.892 Memory Page Size Maximum: 65536 bytes 00:19:07.892 Persistent Memory Region: Not Supported 00:19:07.892 Optional Asynchronous Events Supported 00:19:07.892 Namespace Attribute Notices: Supported 00:19:07.892 Firmware Activation Notices: Not Supported 00:19:07.892 ANA Change Notices: Not Supported 00:19:07.892 PLE Aggregate Log Change Notices: Not Supported 00:19:07.892 LBA Status Info Alert Notices: Not Supported 00:19:07.892 EGE Aggregate Log Change Notices: Not Supported 00:19:07.892 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.892 Zone Descriptor Change Notices: Not Supported 00:19:07.892 Discovery Log Change Notices: Not Supported 00:19:07.892 Controller Attributes 00:19:07.892 128-bit Host Identifier: Not Supported 00:19:07.892 Non-Operational Permissive Mode: Not Supported 00:19:07.892 NVM Sets: Not Supported 00:19:07.892 Read Recovery Levels: Not Supported 00:19:07.892 Endurance Groups: Not Supported 00:19:07.892 Predictable Latency Mode: Not Supported 00:19:07.892 Traffic Based Keep ALive: Not Supported 00:19:07.892 Namespace Granularity: Not Supported 00:19:07.892 SQ Associations: Not Supported 00:19:07.892 UUID List: Not Supported 00:19:07.892 Multi-Domain Subsystem: Not Supported 00:19:07.892 Fixed Capacity Management: Not Supported 00:19:07.892 Variable Capacity Management: Not Supported 00:19:07.892 Delete Endurance Group: Not Supported 00:19:07.892 Delete NVM Set: Not Supported 00:19:07.892 Extended LBA Formats Supported: Supported 00:19:07.892 Flexible Data Placement Supported: Not Supported 00:19:07.892 00:19:07.892 Controller Memory Buffer Support 00:19:07.892 ================================ 00:19:07.892 Supported: No 00:19:07.892 00:19:07.892 Persistent Memory Region Support 00:19:07.892 ================================ 00:19:07.892 Supported: No 00:19:07.892 00:19:07.892 Admin Command Set Attributes 00:19:07.892 ============================ 00:19:07.892 Security Send/Receive: Not Supported 00:19:07.892 Format NVM: Supported 00:19:07.892 Firmware Activate/Download: Not Supported 00:19:07.892 Namespace Management: Supported 00:19:07.892 Device Self-Test: Not Supported 00:19:07.892 Directives: Supported 00:19:07.892 NVMe-MI: Not Supported 00:19:07.892 Virtualization Management: Not Supported 00:19:07.892 Doorbell Buffer Config: Supported 00:19:07.892 Get LBA Status Capability: Not Supported 00:19:07.892 Command & Feature Lockdown Capability: Not 
Supported 00:19:07.892 Abort Command Limit: 4 00:19:07.892 Async Event Request Limit: 4 00:19:07.892 Number of Firmware Slots: N/A 00:19:07.892 Firmware Slot 1 Read-Only: N/A 00:19:07.892 Firmware Activation Without Reset: N/A 00:19:07.892 Multiple Update Detection Support: N/A 00:19:07.892 Firmware Update Granularity: No Information Provided 00:19:07.892 Per-Namespace SMART Log: Yes 00:19:07.892 Asymmetric Namespace Access Log Page: Not Supported 00:19:07.892 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:19:07.892 Command Effects Log Page: Supported 00:19:07.892 Get Log Page Extended Data: Supported 00:19:07.892 Telemetry Log Pages: Not Supported 00:19:07.892 Persistent Event Log Pages: Not Supported 00:19:07.892 Supported Log Pages Log Page: May Support 00:19:07.892 Commands Supported & Effects Log Page: Not Supported 00:19:07.892 Feature Identifiers & Effects Log Page:May Support 00:19:07.892 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.892 Data Area 4 for Telemetry Log: Not Supported 00:19:07.892 Error Log Page Entries Supported: 1 00:19:07.892 Keep Alive: Not Supported 00:19:07.892 00:19:07.892 NVM Command Set Attributes 00:19:07.892 ========================== 00:19:07.892 Submission Queue Entry Size 00:19:07.892 Max: 64 00:19:07.892 Min: 64 00:19:07.892 Completion Queue Entry Size 00:19:07.892 Max: 16 00:19:07.892 Min: 16 00:19:07.892 Number of Namespaces: 256 00:19:07.892 Compare Command: Supported 00:19:07.892 Write Uncorrectable Command: Not Supported 00:19:07.892 Dataset Management Command: Supported 00:19:07.892 Write Zeroes Command: Supported 00:19:07.892 Set Features Save Field: Supported 00:19:07.892 Reservations: Not Supported 00:19:07.892 Timestamp: Supported 00:19:07.892 Copy: Supported 00:19:07.892 Volatile Write Cache: Present 00:19:07.892 Atomic Write Unit (Normal): 1 00:19:07.892 Atomic Write Unit (PFail): 1 00:19:07.892 Atomic Compare & Write Unit: 1 00:19:07.892 Fused Compare & Write: Not Supported 00:19:07.892 Scatter-Gather List 00:19:07.892 SGL Command Set: Supported 00:19:07.892 SGL Keyed: Not Supported 00:19:07.892 SGL Bit Bucket Descriptor: Not Supported 00:19:07.892 SGL Metadata Pointer: Not Supported 00:19:07.892 Oversized SGL: Not Supported 00:19:07.892 SGL Metadata Address: Not Supported 00:19:07.892 SGL Offset: Not Supported 00:19:07.892 Transport SGL Data Block: Not Supported 00:19:07.892 Replay Protected Memory Block: Not Supported 00:19:07.892 00:19:07.892 Firmware Slot Information 00:19:07.892 ========================= 00:19:07.892 Active slot: 1 00:19:07.892 Slot 1 Firmware Revision: 1.0 00:19:07.892 00:19:07.892 00:19:07.892 Commands Supported and Effects 00:19:07.892 ============================== 00:19:07.892 Admin Commands 00:19:07.892 -------------- 00:19:07.892 Delete I/O Submission Queue (00h): Supported 00:19:07.892 Create I/O Submission Queue (01h): Supported 00:19:07.892 Get Log Page (02h): Supported 00:19:07.892 Delete I/O Completion Queue (04h): Supported 00:19:07.892 Create I/O Completion Queue (05h): Supported 00:19:07.892 Identify (06h): Supported 00:19:07.892 Abort (08h): Supported 00:19:07.892 Set Features (09h): Supported 00:19:07.892 Get Features (0Ah): Supported 00:19:07.892 Asynchronous Event Request (0Ch): Supported 00:19:07.892 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:07.892 Directive Send (19h): Supported 00:19:07.892 Directive Receive (1Ah): Supported 00:19:07.892 Virtualization Management (1Ch): Supported 00:19:07.892 Doorbell Buffer Config (7Ch): Supported 00:19:07.892 Format NVM (80h): 
Supported LBA-Change 00:19:07.892 I/O Commands 00:19:07.892 ------------ 00:19:07.892 Flush (00h): Supported LBA-Change 00:19:07.892 Write (01h): Supported LBA-Change 00:19:07.892 Read (02h): Supported 00:19:07.892 Compare (05h): Supported 00:19:07.892 Write Zeroes (08h): Supported LBA-Change 00:19:07.892 Dataset Management (09h): Supported LBA-Change 00:19:07.893 Unknown (0Ch): Supported 00:19:07.893 Unknown (12h): Supported 00:19:07.893 Copy (19h): Supported LBA-Change 00:19:07.893 Unknown (1Dh): Supported LBA-Change 00:19:07.893 00:19:07.893 Error Log 00:19:07.893 ========= 00:19:07.893 00:19:07.893 Arbitration 00:19:07.893 =========== 00:19:07.893 Arbitration Burst: no limit 00:19:07.893 00:19:07.893 Power Management 00:19:07.893 ================ 00:19:07.893 Number of Power States: 1 00:19:07.893 Current Power State: Power State #0 00:19:07.893 Power State #0: 00:19:07.893 Max Power: 25.00 W 00:19:07.893 Non-Operational State: Operational 00:19:07.893 Entry Latency: 16 microseconds 00:19:07.893 Exit Latency: 4 microseconds 00:19:07.893 Relative Read Throughput: 0 00:19:07.893 Relative Read Latency: 0 00:19:07.893 Relative Write Throughput: 0 00:19:07.893 Relative Write Latency: 0 00:19:07.893 Idle Power: Not Reported 00:19:07.893 Active Power: Not Reported 00:19:07.893 Non-Operational Permissive Mode: Not Supported 00:19:07.893 00:19:07.893 Health Information 00:19:07.893 ================== 00:19:07.893 Critical Warnings: 00:19:07.893 Available Spare Space: OK 00:19:07.893 [2024-11-20 11:33:13.584976] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64365 terminated unexpected 00:19:07.893 Temperature: OK 00:19:07.893 Device Reliability: OK 00:19:07.893 Read Only: No 00:19:07.893 Volatile Memory Backup: OK 00:19:07.893 Current Temperature: 323 Kelvin (50 Celsius) 00:19:07.893 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:07.893 Available Spare: 0% 00:19:07.893 Available Spare Threshold: 0% 00:19:07.893 Life Percentage Used: 0% 00:19:07.893 Data Units Read: 1008 00:19:07.893 Data Units Written: 875 00:19:07.893 Host Read Commands: 48257 00:19:07.893 Host Write Commands: 47045 00:19:07.893 Controller Busy Time: 0 minutes 00:19:07.893 Power Cycles: 0 00:19:07.893 Power On Hours: 0 hours 00:19:07.893 Unsafe Shutdowns: 0 00:19:07.893 Unrecoverable Media Errors: 0 00:19:07.893 Lifetime Error Log Entries: 0 00:19:07.893 Warning Temperature Time: 0 minutes 00:19:07.893 Critical Temperature Time: 0 minutes 00:19:07.893 00:19:07.893 Number of Queues 00:19:07.893 ================ 00:19:07.893 Number of I/O Submission Queues: 64 00:19:07.893 Number of I/O Completion Queues: 64 00:19:07.893 00:19:07.893 ZNS Specific Controller Data 00:19:07.893 ============================ 00:19:07.893 Zone Append Size Limit: 0 00:19:07.893 00:19:07.893 00:19:07.893 Active Namespaces 00:19:07.893 ================= 00:19:07.893 Namespace ID:1 00:19:07.893 Error Recovery Timeout: Unlimited 00:19:07.893 Command Set Identifier: NVM (00h) 00:19:07.893 Deallocate: Supported 00:19:07.893 Deallocated/Unwritten Error: Supported 00:19:07.893 Deallocated Read Value: All 0x00 00:19:07.893 Deallocate in Write Zeroes: Not Supported 00:19:07.893 Deallocated Guard Field: 0xFFFF 00:19:07.893 Flush: Supported 00:19:07.893 Reservation: Not Supported 00:19:07.893 Namespace Sharing Capabilities: Private 00:19:07.893 Size (in LBAs): 1310720 (5GiB) 00:19:07.893 Capacity (in LBAs): 1310720 (5GiB) 00:19:07.893 Utilization (in LBAs): 1310720 (5GiB) 00:19:07.893 Thin Provisioning: Not Supported
00:19:07.893 Per-NS Atomic Units: No 00:19:07.893 Maximum Single Source Range Length: 128 00:19:07.893 Maximum Copy Length: 128 00:19:07.893 Maximum Source Range Count: 128 00:19:07.893 NGUID/EUI64 Never Reused: No 00:19:07.893 Namespace Write Protected: No 00:19:07.893 Number of LBA Formats: 8 00:19:07.893 Current LBA Format: LBA Format #04 00:19:07.893 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:07.893 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:07.893 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:07.893 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:07.893 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:07.893 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:07.893 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:07.893 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:07.893 00:19:07.893 NVM Specific Namespace Data 00:19:07.893 =========================== 00:19:07.893 Logical Block Storage Tag Mask: 0 00:19:07.893 Protection Information Capabilities: 00:19:07.893 16b Guard Protection Information Storage Tag Support: No 00:19:07.893 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:07.893 Storage Tag Check Read Support: No 00:19:07.893 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.893 ===================================================== 00:19:07.893 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:07.893 ===================================================== 00:19:07.893 Controller Capabilities/Features 00:19:07.893 ================================ 00:19:07.893 Vendor ID: 1b36 00:19:07.893 Subsystem Vendor ID: 1af4 00:19:07.893 Serial Number: 12343 00:19:07.893 Model Number: QEMU NVMe Ctrl 00:19:07.893 Firmware Version: 8.0.0 00:19:07.893 Recommended Arb Burst: 6 00:19:07.893 IEEE OUI Identifier: 00 54 52 00:19:07.893 Multi-path I/O 00:19:07.893 May have multiple subsystem ports: No 00:19:07.893 May have multiple controllers: Yes 00:19:07.893 Associated with SR-IOV VF: No 00:19:07.893 Max Data Transfer Size: 524288 00:19:07.893 Max Number of Namespaces: 256 00:19:07.893 Max Number of I/O Queues: 64 00:19:07.893 NVMe Specification Version (VS): 1.4 00:19:07.893 NVMe Specification Version (Identify): 1.4 00:19:07.893 Maximum Queue Entries: 2048 00:19:07.893 Contiguous Queues Required: Yes 00:19:07.893 Arbitration Mechanisms Supported 00:19:07.893 Weighted Round Robin: Not Supported 00:19:07.893 Vendor Specific: Not Supported 00:19:07.893 Reset Timeout: 7500 ms 00:19:07.893 Doorbell Stride: 4 bytes 00:19:07.893 NVM Subsystem Reset: Not Supported 00:19:07.893 Command Sets Supported 00:19:07.893 NVM Command Set: Supported 00:19:07.893 Boot Partition: Not Supported 00:19:07.893 Memory Page Size Minimum: 4096 bytes 00:19:07.893 
Memory Page Size Maximum: 65536 bytes 00:19:07.893 Persistent Memory Region: Not Supported 00:19:07.893 Optional Asynchronous Events Supported 00:19:07.893 Namespace Attribute Notices: Supported 00:19:07.893 Firmware Activation Notices: Not Supported 00:19:07.893 ANA Change Notices: Not Supported 00:19:07.893 PLE Aggregate Log Change Notices: Not Supported 00:19:07.893 LBA Status Info Alert Notices: Not Supported 00:19:07.893 EGE Aggregate Log Change Notices: Not Supported 00:19:07.893 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.893 Zone Descriptor Change Notices: Not Supported 00:19:07.893 Discovery Log Change Notices: Not Supported 00:19:07.893 Controller Attributes 00:19:07.893 128-bit Host Identifier: Not Supported 00:19:07.893 Non-Operational Permissive Mode: Not Supported 00:19:07.893 NVM Sets: Not Supported 00:19:07.893 Read Recovery Levels: Not Supported 00:19:07.893 Endurance Groups: Supported 00:19:07.893 Predictable Latency Mode: Not Supported 00:19:07.893 Traffic Based Keep ALive: Not Supported 00:19:07.893 Namespace Granularity: Not Supported 00:19:07.893 SQ Associations: Not Supported 00:19:07.893 UUID List: Not Supported 00:19:07.893 Multi-Domain Subsystem: Not Supported 00:19:07.893 Fixed Capacity Management: Not Supported 00:19:07.893 Variable Capacity Management: Not Supported 00:19:07.893 Delete Endurance Group: Not Supported 00:19:07.893 Delete NVM Set: Not Supported 00:19:07.893 Extended LBA Formats Supported: Supported 00:19:07.893 Flexible Data Placement Supported: Supported 00:19:07.893 00:19:07.893 Controller Memory Buffer Support 00:19:07.893 ================================ 00:19:07.893 Supported: No 00:19:07.893 00:19:07.893 Persistent Memory Region Support 00:19:07.893 ================================ 00:19:07.893 Supported: No 00:19:07.893 00:19:07.893 Admin Command Set Attributes 00:19:07.893 ============================ 00:19:07.893 Security Send/Receive: Not Supported 00:19:07.893 Format NVM: Supported 00:19:07.893 Firmware Activate/Download: Not Supported 00:19:07.893 Namespace Management: Supported 00:19:07.893 Device Self-Test: Not Supported 00:19:07.894 Directives: Supported 00:19:07.894 NVMe-MI: Not Supported 00:19:07.894 Virtualization Management: Not Supported 00:19:07.894 Doorbell Buffer Config: Supported 00:19:07.894 Get LBA Status Capability: Not Supported 00:19:07.894 Command & Feature Lockdown Capability: Not Supported 00:19:07.894 Abort Command Limit: 4 00:19:07.894 Async Event Request Limit: 4 00:19:07.894 Number of Firmware Slots: N/A 00:19:07.894 Firmware Slot 1 Read-Only: N/A 00:19:07.894 Firmware Activation Without Reset: N/A 00:19:07.894 Multiple Update Detection Support: N/A 00:19:07.894 Firmware Update Granularity: No Information Provided 00:19:07.894 Per-Namespace SMART Log: Yes 00:19:07.894 Asymmetric Namespace Access Log Page: Not Supported 00:19:07.894 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:07.894 Command Effects Log Page: Supported 00:19:07.894 Get Log Page Extended Data: Supported 00:19:07.894 Telemetry Log Pages: Not Supported 00:19:07.894 Persistent Event Log Pages: Not Supported 00:19:07.894 Supported Log Pages Log Page: May Support 00:19:07.894 Commands Supported & Effects Log Page: Not Supported 00:19:07.894 Feature Identifiers & Effects Log Page:May Support 00:19:07.894 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.894 Data Area 4 for Telemetry Log: Not Supported 00:19:07.894 Error Log Page Entries Supported: 1 00:19:07.894 Keep Alive: Not Supported 00:19:07.894 00:19:07.894 NVM 
Command Set Attributes 00:19:07.894 ========================== 00:19:07.894 Submission Queue Entry Size 00:19:07.894 Max: 64 00:19:07.894 Min: 64 00:19:07.894 Completion Queue Entry Size 00:19:07.894 Max: 16 00:19:07.894 Min: 16 00:19:07.894 Number of Namespaces: 256 00:19:07.894 Compare Command: Supported 00:19:07.894 Write Uncorrectable Command: Not Supported 00:19:07.894 Dataset Management Command: Supported 00:19:07.894 Write Zeroes Command: Supported 00:19:07.894 Set Features Save Field: Supported 00:19:07.894 Reservations: Not Supported 00:19:07.894 Timestamp: Supported 00:19:07.894 Copy: Supported 00:19:07.894 Volatile Write Cache: Present 00:19:07.894 Atomic Write Unit (Normal): 1 00:19:07.894 Atomic Write Unit (PFail): 1 00:19:07.894 Atomic Compare & Write Unit: 1 00:19:07.894 Fused Compare & Write: Not Supported 00:19:07.894 Scatter-Gather List 00:19:07.894 SGL Command Set: Supported 00:19:07.894 SGL Keyed: Not Supported 00:19:07.894 SGL Bit Bucket Descriptor: Not Supported 00:19:07.894 SGL Metadata Pointer: Not Supported 00:19:07.894 Oversized SGL: Not Supported 00:19:07.894 SGL Metadata Address: Not Supported 00:19:07.894 SGL Offset: Not Supported 00:19:07.894 Transport SGL Data Block: Not Supported 00:19:07.894 Replay Protected Memory Block: Not Supported 00:19:07.894 00:19:07.894 Firmware Slot Information 00:19:07.894 ========================= 00:19:07.894 Active slot: 1 00:19:07.894 Slot 1 Firmware Revision: 1.0 00:19:07.894 00:19:07.894 00:19:07.894 Commands Supported and Effects 00:19:07.894 ============================== 00:19:07.894 Admin Commands 00:19:07.894 -------------- 00:19:07.894 Delete I/O Submission Queue (00h): Supported 00:19:07.894 Create I/O Submission Queue (01h): Supported 00:19:07.894 Get Log Page (02h): Supported 00:19:07.894 Delete I/O Completion Queue (04h): Supported 00:19:07.894 Create I/O Completion Queue (05h): Supported 00:19:07.894 Identify (06h): Supported 00:19:07.894 Abort (08h): Supported 00:19:07.894 Set Features (09h): Supported 00:19:07.894 Get Features (0Ah): Supported 00:19:07.894 Asynchronous Event Request (0Ch): Supported 00:19:07.894 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:07.894 Directive Send (19h): Supported 00:19:07.894 Directive Receive (1Ah): Supported 00:19:07.894 Virtualization Management (1Ch): Supported 00:19:07.894 Doorbell Buffer Config (7Ch): Supported 00:19:07.894 Format NVM (80h): Supported LBA-Change 00:19:07.894 I/O Commands 00:19:07.894 ------------ 00:19:07.894 Flush (00h): Supported LBA-Change 00:19:07.894 Write (01h): Supported LBA-Change 00:19:07.894 Read (02h): Supported 00:19:07.894 Compare (05h): Supported 00:19:07.894 Write Zeroes (08h): Supported LBA-Change 00:19:07.894 Dataset Management (09h): Supported LBA-Change 00:19:07.894 Unknown (0Ch): Supported 00:19:07.894 Unknown (12h): Supported 00:19:07.894 Copy (19h): Supported LBA-Change 00:19:07.894 Unknown (1Dh): Supported LBA-Change 00:19:07.894 00:19:07.894 Error Log 00:19:07.894 ========= 00:19:07.894 00:19:07.894 Arbitration 00:19:07.894 =========== 00:19:07.894 Arbitration Burst: no limit 00:19:07.894 00:19:07.894 Power Management 00:19:07.894 ================ 00:19:07.894 Number of Power States: 1 00:19:07.894 Current Power State: Power State #0 00:19:07.894 Power State #0: 00:19:07.894 Max Power: 25.00 W 00:19:07.894 Non-Operational State: Operational 00:19:07.894 Entry Latency: 16 microseconds 00:19:07.894 Exit Latency: 4 microseconds 00:19:07.894 Relative Read Throughput: 0 00:19:07.894 Relative Read Latency: 0 00:19:07.894 
Relative Write Throughput: 0 00:19:07.894 Relative Write Latency: 0 00:19:07.894 Idle Power: Not Reported 00:19:07.894 Active Power: Not Reported 00:19:07.894 Non-Operational Permissive Mode: Not Supported 00:19:07.894 00:19:07.894 Health Information 00:19:07.894 ================== 00:19:07.894 Critical Warnings: 00:19:07.894 Available Spare Space: OK 00:19:07.894 Temperature: OK 00:19:07.894 Device Reliability: OK 00:19:07.894 Read Only: No 00:19:07.894 Volatile Memory Backup: OK 00:19:07.894 Current Temperature: 323 Kelvin (50 Celsius) 00:19:07.894 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:07.894 Available Spare: 0% 00:19:07.894 Available Spare Threshold: 0% 00:19:07.894 Life Percentage Used: 0% 00:19:07.894 Data Units Read: 775 00:19:07.894 Data Units Written: 704 00:19:07.894 Host Read Commands: 34031 00:19:07.894 Host Write Commands: 33454 00:19:07.894 Controller Busy Time: 0 minutes 00:19:07.894 Power Cycles: 0 00:19:07.894 Power On Hours: 0 hours 00:19:07.894 Unsafe Shutdowns: 0 00:19:07.894 Unrecoverable Media Errors: 0 00:19:07.894 Lifetime Error Log Entries: 0 00:19:07.894 Warning Temperature Time: 0 minutes 00:19:07.894 Critical Temperature Time: 0 minutes 00:19:07.894 00:19:07.894 Number of Queues 00:19:07.894 ================ 00:19:07.894 Number of I/O Submission Queues: 64 00:19:07.894 Number of I/O Completion Queues: 64 00:19:07.894 00:19:07.894 ZNS Specific Controller Data 00:19:07.894 ============================ 00:19:07.894 Zone Append Size Limit: 0 00:19:07.894 00:19:07.894 00:19:07.894 Active Namespaces 00:19:07.894 ================= 00:19:07.894 Namespace ID:1 00:19:07.894 Error Recovery Timeout: Unlimited 00:19:07.894 Command Set Identifier: NVM (00h) 00:19:07.894 Deallocate: Supported 00:19:07.894 Deallocated/Unwritten Error: Supported 00:19:07.894 Deallocated Read Value: All 0x00 00:19:07.894 Deallocate in Write Zeroes: Not Supported 00:19:07.894 Deallocated Guard Field: 0xFFFF 00:19:07.894 Flush: Supported 00:19:07.894 Reservation: Not Supported 00:19:07.894 Namespace Sharing Capabilities: Multiple Controllers 00:19:07.894 Size (in LBAs): 262144 (1GiB) 00:19:07.894 Capacity (in LBAs): 262144 (1GiB) 00:19:07.894 Utilization (in LBAs): 262144 (1GiB) 00:19:07.894 Thin Provisioning: Not Supported 00:19:07.894 Per-NS Atomic Units: No 00:19:07.894 Maximum Single Source Range Length: 128 00:19:07.894 Maximum Copy Length: 128 00:19:07.894 Maximum Source Range Count: 128 00:19:07.894 NGUID/EUI64 Never Reused: No 00:19:07.894 Namespace Write Protected: No 00:19:07.894 Endurance group ID: 1 00:19:07.894 Number of LBA Formats: 8 00:19:07.894 Current LBA Format: LBA Format #04 00:19:07.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:07.894 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:07.894 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:07.894 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:07.894 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:07.894 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:07.894 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:07.894 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:07.894 00:19:07.894 Get Feature FDP: 00:19:07.894 ================ 00:19:07.894 Enabled: Yes 00:19:07.894 FDP configuration index: 0 00:19:07.894 00:19:07.894 FDP configurations log page 00:19:07.894 =========================== 00:19:07.894 Number of FDP configurations: 1 00:19:07.894 Version: 0 00:19:07.894 Size: 112 00:19:07.895 FDP Configuration Descriptor: 0 00:19:07.895 Descriptor Size: 96 
00:19:07.895 Reclaim Group Identifier format: 2 00:19:07.895 FDP Volatile Write Cache: Not Present 00:19:07.895 FDP Configuration: Valid 00:19:07.895 Vendor Specific Size: 0 00:19:07.895 Number of Reclaim Groups: 2 00:19:07.895 Number of Reclaim Unit Handles: 8 00:19:07.895 Max Placement Identifiers: 128 00:19:07.895 Number of Namespaces Supported: 256 00:19:07.895 Reclaim unit Nominal Size: 6000000 bytes 00:19:07.895 Estimated Reclaim Unit Time Limit: Not Reported 00:19:07.895 RUH Desc #000: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #001: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #002: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #003: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #004: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #005: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #006: RUH Type: Initially Isolated 00:19:07.895 RUH Desc #007: RUH Type: Initially Isolated 00:19:07.895 00:19:07.895 FDP reclaim unit handle usage log page 00:19:07.895 ====================================== 00:19:07.895 Number of Reclaim Unit Handles: 8 00:19:07.895 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:07.895 RUH Usage Desc #001: RUH Attributes: Unused 00:19:07.895 RUH Usage Desc #002: RUH Attributes: Unused 00:19:07.895 RUH Usage Desc #003: RUH Attributes: Unused 00:19:07.895 RUH Usage Desc #004: RUH Attributes: Unused 00:19:07.895 RUH Usage Desc #005: RUH Attributes: Unused 00:19:07.895 RUH Usage Desc #006: RUH Attributes: Unused 00:19:07.895 RUH Usage Desc #007: RUH Attributes: Unused 00:19:07.895 00:19:07.895 FDP statistics log page 00:19:07.895 ======================= 00:19:07.895 Host bytes with metadata written: 442998784 00:19:07.895 [2024-11-20 11:33:13.586828] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64365 terminated unexpected 00:19:07.895 Media bytes with metadata written: 443064320 00:19:07.895 Media bytes erased: 0 00:19:07.895 00:19:07.895 FDP events log page 00:19:07.895 =================== 00:19:07.895 Number of FDP events: 0 00:19:07.895 00:19:07.895 NVM Specific Namespace Data 00:19:07.895 =========================== 00:19:07.895 Logical Block Storage Tag Mask: 0 00:19:07.895 Protection Information Capabilities: 00:19:07.895 16b Guard Protection Information Storage Tag Support: No 00:19:07.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:07.895 Storage Tag Check Read Support: No 00:19:07.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:07.895 ===================================================== 00:19:07.895 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:07.895 ===================================================== 00:19:07.895 Controller Capabilities/Features
================================ 00:19:07.895 Vendor ID: 1b36 00:19:07.895 Subsystem Vendor ID: 1af4 00:19:07.895 Serial Number: 12342 00:19:07.895 Model Number: QEMU NVMe Ctrl 00:19:07.895 Firmware Version: 8.0.0 00:19:07.895 Recommended Arb Burst: 6 00:19:07.895 IEEE OUI Identifier: 00 54 52 00:19:07.895 Multi-path I/O 00:19:07.895 May have multiple subsystem ports: No 00:19:07.895 May have multiple controllers: No 00:19:07.895 Associated with SR-IOV VF: No 00:19:07.895 Max Data Transfer Size: 524288 00:19:07.895 Max Number of Namespaces: 256 00:19:07.895 Max Number of I/O Queues: 64 00:19:07.895 NVMe Specification Version (VS): 1.4 00:19:07.895 NVMe Specification Version (Identify): 1.4 00:19:07.895 Maximum Queue Entries: 2048 00:19:07.895 Contiguous Queues Required: Yes 00:19:07.895 Arbitration Mechanisms Supported 00:19:07.895 Weighted Round Robin: Not Supported 00:19:07.895 Vendor Specific: Not Supported 00:19:07.895 Reset Timeout: 7500 ms 00:19:07.895 Doorbell Stride: 4 bytes 00:19:07.895 NVM Subsystem Reset: Not Supported 00:19:07.895 Command Sets Supported 00:19:07.895 NVM Command Set: Supported 00:19:07.895 Boot Partition: Not Supported 00:19:07.895 Memory Page Size Minimum: 4096 bytes 00:19:07.895 Memory Page Size Maximum: 65536 bytes 00:19:07.895 Persistent Memory Region: Not Supported 00:19:07.895 Optional Asynchronous Events Supported 00:19:07.895 Namespace Attribute Notices: Supported 00:19:07.895 Firmware Activation Notices: Not Supported 00:19:07.895 ANA Change Notices: Not Supported 00:19:07.895 PLE Aggregate Log Change Notices: Not Supported 00:19:07.895 LBA Status Info Alert Notices: Not Supported 00:19:07.895 EGE Aggregate Log Change Notices: Not Supported 00:19:07.895 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.895 Zone Descriptor Change Notices: Not Supported 00:19:07.895 Discovery Log Change Notices: Not Supported 00:19:07.895 Controller Attributes 00:19:07.895 128-bit Host Identifier: Not Supported 00:19:07.895 Non-Operational Permissive Mode: Not Supported 00:19:07.895 NVM Sets: Not Supported 00:19:07.895 Read Recovery Levels: Not Supported 00:19:07.895 Endurance Groups: Not Supported 00:19:07.895 Predictable Latency Mode: Not Supported 00:19:07.895 Traffic Based Keep ALive: Not Supported 00:19:07.895 Namespace Granularity: Not Supported 00:19:07.895 SQ Associations: Not Supported 00:19:07.895 UUID List: Not Supported 00:19:07.895 Multi-Domain Subsystem: Not Supported 00:19:07.895 Fixed Capacity Management: Not Supported 00:19:07.895 Variable Capacity Management: Not Supported 00:19:07.895 Delete Endurance Group: Not Supported 00:19:07.895 Delete NVM Set: Not Supported 00:19:07.895 Extended LBA Formats Supported: Supported 00:19:07.895 Flexible Data Placement Supported: Not Supported 00:19:07.895 00:19:07.895 Controller Memory Buffer Support 00:19:07.895 ================================ 00:19:07.895 Supported: No 00:19:07.895 00:19:07.895 Persistent Memory Region Support 00:19:07.895 ================================ 00:19:07.895 Supported: No 00:19:07.895 00:19:07.895 Admin Command Set Attributes 00:19:07.895 ============================ 00:19:07.895 Security Send/Receive: Not Supported 00:19:07.895 Format NVM: Supported 00:19:07.895 Firmware Activate/Download: Not Supported 00:19:07.895 Namespace Management: Supported 00:19:07.895 Device Self-Test: Not Supported 00:19:07.895 Directives: Supported 00:19:07.895 NVMe-MI: Not Supported 00:19:07.895 Virtualization Management: Not Supported 00:19:07.895 Doorbell Buffer Config: Supported 00:19:07.895 Get 
00:19:07.895 Command & Feature Lockdown Capability: Not Supported
00:19:07.895 Abort Command Limit: 4
00:19:07.895 Async Event Request Limit: 4
00:19:07.895 Number of Firmware Slots: N/A
00:19:07.895 Firmware Slot 1 Read-Only: N/A
00:19:07.895 Firmware Activation Without Reset: N/A
00:19:07.895 Multiple Update Detection Support: N/A
00:19:07.895 Firmware Update Granularity: No Information Provided
00:19:07.896 Per-Namespace SMART Log: Yes
00:19:07.896 Asymmetric Namespace Access Log Page: Not Supported
00:19:07.896 Subsystem NQN: nqn.2019-08.org.qemu:12342
00:19:07.896 Command Effects Log Page: Supported
00:19:07.896 Get Log Page Extended Data: Supported
00:19:07.896 Telemetry Log Pages: Not Supported
00:19:07.896 Persistent Event Log Pages: Not Supported
00:19:07.896 Supported Log Pages Log Page: May Support
00:19:07.896 Commands Supported & Effects Log Page: Not Supported
00:19:07.896 Feature Identifiers & Effects Log Page:May Support
00:19:07.896 NVMe-MI Commands & Effects Log Page: May Support
00:19:07.896 Data Area 4 for Telemetry Log: Not Supported
00:19:07.896 Error Log Page Entries Supported: 1
00:19:07.896 Keep Alive: Not Supported
00:19:07.896 
00:19:07.896 NVM Command Set Attributes
00:19:07.896 ==========================
00:19:07.896 Submission Queue Entry Size
00:19:07.896 Max: 64
00:19:07.896 Min: 64
00:19:07.896 Completion Queue Entry Size
00:19:07.896 Max: 16
00:19:07.896 Min: 16
00:19:07.896 Number of Namespaces: 256
00:19:07.896 Compare Command: Supported
00:19:07.896 Write Uncorrectable Command: Not Supported
00:19:07.896 Dataset Management Command: Supported
00:19:07.896 Write Zeroes Command: Supported
00:19:07.896 Set Features Save Field: Supported
00:19:07.896 Reservations: Not Supported
00:19:07.896 Timestamp: Supported
00:19:07.896 Copy: Supported
00:19:07.896 Volatile Write Cache: Present
00:19:07.896 Atomic Write Unit (Normal): 1
00:19:07.896 Atomic Write Unit (PFail): 1
00:19:07.896 Atomic Compare & Write Unit: 1
00:19:07.896 Fused Compare & Write: Not Supported
00:19:07.896 Scatter-Gather List
00:19:07.896 SGL Command Set: Supported
00:19:07.896 SGL Keyed: Not Supported
00:19:07.896 SGL Bit Bucket Descriptor: Not Supported
00:19:07.896 SGL Metadata Pointer: Not Supported
00:19:07.896 Oversized SGL: Not Supported
00:19:07.896 SGL Metadata Address: Not Supported
00:19:07.896 SGL Offset: Not Supported
00:19:07.896 Transport SGL Data Block: Not Supported
00:19:07.896 Replay Protected Memory Block: Not Supported
00:19:07.896 
00:19:07.896 Firmware Slot Information
00:19:07.896 =========================
00:19:07.896 Active slot: 1
00:19:07.896 Slot 1 Firmware Revision: 1.0
00:19:07.896 
00:19:07.896 
00:19:07.896 Commands Supported and Effects
00:19:07.896 ==============================
00:19:07.896 Admin Commands
00:19:07.896 --------------
00:19:07.896 Delete I/O Submission Queue (00h): Supported
00:19:07.896 Create I/O Submission Queue (01h): Supported
00:19:07.896 Get Log Page (02h): Supported
00:19:07.896 Delete I/O Completion Queue (04h): Supported
00:19:07.896 Create I/O Completion Queue (05h): Supported
00:19:07.896 Identify (06h): Supported
00:19:07.896 Abort (08h): Supported
00:19:07.896 Set Features (09h): Supported
00:19:07.896 Get Features (0Ah): Supported
00:19:07.896 Asynchronous Event Request (0Ch): Supported
00:19:07.896 Namespace Attachment (15h): Supported NS-Inventory-Change
00:19:07.896 Directive Send (19h): Supported
00:19:07.896 Directive Receive (1Ah): Supported
00:19:07.896 Virtualization Management (1Ch): Supported
00:19:07.896 Doorbell Buffer Config (7Ch): Supported
00:19:07.896 Format NVM (80h): Supported LBA-Change
00:19:07.896 I/O Commands
00:19:07.896 ------------
00:19:07.896 Flush (00h): Supported LBA-Change
00:19:07.896 Write (01h): Supported LBA-Change
00:19:07.896 Read (02h): Supported
00:19:07.896 Compare (05h): Supported
00:19:07.896 Write Zeroes (08h): Supported LBA-Change
00:19:07.896 Dataset Management (09h): Supported LBA-Change
00:19:07.896 Unknown (0Ch): Supported
00:19:07.896 Unknown (12h): Supported
00:19:07.896 Copy (19h): Supported LBA-Change
00:19:07.896 Unknown (1Dh): Supported LBA-Change
00:19:07.896 
00:19:07.896 Error Log
00:19:07.896 =========
00:19:07.896 
00:19:07.896 Arbitration
00:19:07.896 ===========
00:19:07.896 Arbitration Burst: no limit
00:19:07.896 
00:19:07.896 Power Management
00:19:07.896 ================
00:19:07.896 Number of Power States: 1
00:19:07.896 Current Power State: Power State #0
00:19:07.896 Power State #0:
00:19:07.896 Max Power: 25.00 W
00:19:07.896 Non-Operational State: Operational
00:19:07.896 Entry Latency: 16 microseconds
00:19:07.896 Exit Latency: 4 microseconds
00:19:07.896 Relative Read Throughput: 0
00:19:07.896 Relative Read Latency: 0
00:19:07.896 Relative Write Throughput: 0
00:19:07.896 Relative Write Latency: 0
00:19:07.896 Idle Power: Not Reported
00:19:07.896 Active Power: Not Reported
00:19:07.896 Non-Operational Permissive Mode: Not Supported
00:19:07.896 
00:19:07.896 Health Information
00:19:07.896 ==================
00:19:07.896 Critical Warnings:
00:19:07.896 Available Spare Space: OK
00:19:07.896 Temperature: OK
00:19:07.896 Device Reliability: OK
00:19:07.896 Read Only: No
00:19:07.896 Volatile Memory Backup: OK
00:19:07.896 Current Temperature: 323 Kelvin (50 Celsius)
00:19:07.896 Temperature Threshold: 343 Kelvin (70 Celsius)
00:19:07.896 Available Spare: 0%
00:19:07.896 Available Spare Threshold: 0%
00:19:07.896 Life Percentage Used: 0%
00:19:07.896 Data Units Read: 2076
00:19:07.896 Data Units Written: 1863
00:19:07.896 Host Read Commands: 99678
00:19:07.896 Host Write Commands: 97947
00:19:07.896 Controller Busy Time: 0 minutes
00:19:07.896 Power Cycles: 0
00:19:07.896 Power On Hours: 0 hours
00:19:07.896 Unsafe Shutdowns: 0
00:19:07.896 Unrecoverable Media Errors: 0
00:19:07.896 Lifetime Error Log Entries: 0
00:19:07.896 Warning Temperature Time: 0 minutes
00:19:07.896 Critical Temperature Time: 0 minutes
00:19:07.896 
00:19:07.896 Number of Queues
00:19:07.896 ================
00:19:07.896 Number of I/O Submission Queues: 64
00:19:07.896 Number of I/O Completion Queues: 64
00:19:07.896 
00:19:07.896 ZNS Specific Controller Data
00:19:07.896 ============================
00:19:07.896 Zone Append Size Limit: 0
00:19:07.896 
00:19:07.896 
00:19:07.896 Active Namespaces
00:19:07.896 =================
00:19:07.896 Namespace ID:1
00:19:07.896 Error Recovery Timeout: Unlimited
00:19:07.896 Command Set Identifier: NVM (00h)
00:19:07.896 Deallocate: Supported
00:19:07.896 Deallocated/Unwritten Error: Supported
00:19:07.896 Deallocated Read Value: All 0x00
00:19:07.896 Deallocate in Write Zeroes: Not Supported
00:19:07.896 Deallocated Guard Field: 0xFFFF
00:19:07.896 Flush: Supported
00:19:07.896 Reservation: Not Supported
00:19:07.896 Namespace Sharing Capabilities: Private
00:19:07.896 Size (in LBAs): 1048576 (4GiB)
00:19:07.896 Capacity (in LBAs): 1048576 (4GiB)
00:19:07.896 Utilization (in LBAs): 1048576 (4GiB)
00:19:07.896 Thin Provisioning: Not Supported
00:19:07.896 Per-NS Atomic Units: No
00:19:07.896 Maximum Single Source Range Length: 128
00:19:07.896 Maximum Copy Length: 128
00:19:07.896 Maximum Source Range Count: 128
00:19:07.896 NGUID/EUI64 Never Reused: No
00:19:07.896 Namespace Write Protected: No
00:19:07.896 Number of LBA Formats: 8
00:19:07.896 Current LBA Format: LBA Format #04
00:19:07.896 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:07.896 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:07.896 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:07.896 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:07.896 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:07.896 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:07.896 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:07.896 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:07.896 
00:19:07.896 NVM Specific Namespace Data
00:19:07.896 ===========================
00:19:07.896 Logical Block Storage Tag Mask: 0
00:19:07.896 Protection Information Capabilities:
00:19:07.896 16b Guard Protection Information Storage Tag Support: No
00:19:07.896 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:07.896 Storage Tag Check Read Support: No
00:19:07.896 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.896 Namespace ID:2
00:19:07.896 Error Recovery Timeout: Unlimited
00:19:07.896 Command Set Identifier: NVM (00h)
00:19:07.896 Deallocate: Supported
00:19:07.897 Deallocated/Unwritten Error: Supported
00:19:07.897 Deallocated Read Value: All 0x00
00:19:07.897 Deallocate in Write Zeroes: Not Supported
00:19:07.897 Deallocated Guard Field: 0xFFFF
00:19:07.897 Flush: Supported
00:19:07.897 Reservation: Not Supported
00:19:07.897 Namespace Sharing Capabilities: Private
00:19:07.897 Size (in LBAs): 1048576 (4GiB)
00:19:07.897 Capacity (in LBAs): 1048576 (4GiB)
00:19:07.897 Utilization (in LBAs): 1048576 (4GiB)
00:19:07.897 Thin Provisioning: Not Supported
00:19:07.897 Per-NS Atomic Units: No
00:19:07.897 Maximum Single Source Range Length: 128
00:19:07.897 Maximum Copy Length: 128
00:19:07.897 Maximum Source Range Count: 128
00:19:07.897 NGUID/EUI64 Never Reused: No
00:19:07.897 Namespace Write Protected: No
00:19:07.897 Number of LBA Formats: 8
00:19:07.897 Current LBA Format: LBA Format #04
00:19:07.897 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:07.897 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:07.897 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:07.897 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:07.897 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:07.897 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:07.897 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:07.897 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:07.897 
00:19:07.897 NVM Specific Namespace Data
00:19:07.897 ===========================
00:19:07.897 Logical Block Storage Tag Mask: 0
00:19:07.897 Protection Information Capabilities:
00:19:07.897 16b Guard Protection Information Storage Tag Support: No
00:19:07.897 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:07.897 Storage Tag Check Read Support: No
00:19:07.897 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Namespace ID:3
00:19:07.897 Error Recovery Timeout: Unlimited
00:19:07.897 Command Set Identifier: NVM (00h)
00:19:07.897 Deallocate: Supported
00:19:07.897 Deallocated/Unwritten Error: Supported
00:19:07.897 Deallocated Read Value: All 0x00
00:19:07.897 Deallocate in Write Zeroes: Not Supported
00:19:07.897 Deallocated Guard Field: 0xFFFF
00:19:07.897 Flush: Supported
00:19:07.897 Reservation: Not Supported
00:19:07.897 Namespace Sharing Capabilities: Private
00:19:07.897 Size (in LBAs): 1048576 (4GiB)
00:19:07.897 Capacity (in LBAs): 1048576 (4GiB)
00:19:07.897 Utilization (in LBAs): 1048576 (4GiB)
00:19:07.897 Thin Provisioning: Not Supported
00:19:07.897 Per-NS Atomic Units: No
00:19:07.897 Maximum Single Source Range Length: 128
00:19:07.897 Maximum Copy Length: 128
00:19:07.897 Maximum Source Range Count: 128
00:19:07.897 NGUID/EUI64 Never Reused: No
00:19:07.897 Namespace Write Protected: No
00:19:07.897 Number of LBA Formats: 8
00:19:07.897 Current LBA Format: LBA Format #04
00:19:07.897 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:07.897 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:07.897 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:07.897 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:07.897 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:07.897 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:07.897 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:07.897 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:07.897 
00:19:07.897 NVM Specific Namespace Data
00:19:07.897 ===========================
00:19:07.897 Logical Block Storage Tag Mask: 0
00:19:07.897 Protection Information Capabilities:
00:19:07.897 16b Guard Protection Information Storage Tag Support: No
00:19:07.897 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:07.897 Storage Tag Check Read Support: No
00:19:07.897 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:07.897 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:19:07.897 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:19:08.465 =====================================================
00:19:08.465 NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:08.465 =====================================================
00:19:08.465 Controller Capabilities/Features
00:19:08.465 ================================
00:19:08.465 Vendor ID: 1b36
00:19:08.465 Subsystem Vendor ID: 1af4
00:19:08.465 Serial Number: 12340
00:19:08.465 Model Number: QEMU NVMe Ctrl
00:19:08.465 Firmware Version: 8.0.0
00:19:08.465 Recommended Arb Burst: 6
00:19:08.465 IEEE OUI Identifier: 00 54 52
00:19:08.465 Multi-path I/O
00:19:08.465 May have multiple subsystem ports: No
00:19:08.465 May have multiple controllers: No
00:19:08.465 Associated with SR-IOV VF: No
00:19:08.465 Max Data Transfer Size: 524288
00:19:08.465 Max Number of Namespaces: 256
00:19:08.465 Max Number of I/O Queues: 64
00:19:08.465 NVMe Specification Version (VS): 1.4
00:19:08.465 NVMe Specification Version (Identify): 1.4
00:19:08.465 Maximum Queue Entries: 2048
00:19:08.465 Contiguous Queues Required: Yes
00:19:08.465 Arbitration Mechanisms Supported
00:19:08.465 Weighted Round Robin: Not Supported
00:19:08.465 Vendor Specific: Not Supported
00:19:08.465 Reset Timeout: 7500 ms
00:19:08.465 Doorbell Stride: 4 bytes
00:19:08.465 NVM Subsystem Reset: Not Supported
00:19:08.465 Command Sets Supported
00:19:08.466 NVM Command Set: Supported
00:19:08.466 Boot Partition: Not Supported
00:19:08.466 Memory Page Size Minimum: 4096 bytes
00:19:08.466 Memory Page Size Maximum: 65536 bytes
00:19:08.466 Persistent Memory Region: Not Supported
00:19:08.466 Optional Asynchronous Events Supported
00:19:08.466 Namespace Attribute Notices: Supported
00:19:08.466 Firmware Activation Notices: Not Supported
00:19:08.466 ANA Change Notices: Not Supported
00:19:08.466 PLE Aggregate Log Change Notices: Not Supported
00:19:08.466 LBA Status Info Alert Notices: Not Supported
00:19:08.466 EGE Aggregate Log Change Notices: Not Supported
00:19:08.466 Normal NVM Subsystem Shutdown event: Not Supported
00:19:08.466 Zone Descriptor Change Notices: Not Supported
00:19:08.466 Discovery Log Change Notices: Not Supported
00:19:08.466 Controller Attributes
00:19:08.466 128-bit Host Identifier: Not Supported
00:19:08.466 Non-Operational Permissive Mode: Not Supported
00:19:08.466 NVM Sets: Not Supported
00:19:08.466 Read Recovery Levels: Not Supported
00:19:08.466 Endurance Groups: Not Supported
00:19:08.466 Predictable Latency Mode: Not Supported
00:19:08.466 Traffic Based Keep ALive: Not Supported
00:19:08.466 Namespace Granularity: Not Supported
00:19:08.466 SQ Associations: Not Supported
00:19:08.466 UUID List: Not Supported
00:19:08.466 Multi-Domain Subsystem: Not Supported
00:19:08.466 Fixed Capacity Management: Not Supported
00:19:08.466 Variable Capacity Management: Not Supported
00:19:08.466 Delete Endurance Group: Not Supported
00:19:08.466 Delete NVM Set: Not Supported
00:19:08.466 Extended LBA Formats Supported: Supported
00:19:08.466 Flexible Data Placement Supported: Not Supported
00:19:08.466 
00:19:08.466 Controller Memory Buffer Support
00:19:08.466 ================================
00:19:08.466 Supported: No
00:19:08.466 
00:19:08.466 Persistent Memory Region Support
00:19:08.466 ================================
00:19:08.466 Supported: No
00:19:08.466 
00:19:08.466 Admin Command Set Attributes
00:19:08.466 ============================
00:19:08.466 Security Send/Receive: Not Supported
00:19:08.466 Format NVM: Supported
00:19:08.466 Firmware Activate/Download: Not Supported
00:19:08.466 Namespace Management: Supported
00:19:08.466 Device Self-Test: Not Supported
00:19:08.466 Directives: Supported
00:19:08.466 NVMe-MI: Not Supported
00:19:08.466 Virtualization Management: Not Supported
00:19:08.466 Doorbell Buffer Config: Supported
00:19:08.466 Get LBA Status Capability: Not Supported
00:19:08.466 Command & Feature Lockdown Capability: Not Supported
00:19:08.466 Abort Command Limit: 4
00:19:08.466 Async Event Request Limit: 4
00:19:08.466 Number of Firmware Slots: N/A
00:19:08.466 Firmware Slot 1 Read-Only: N/A
00:19:08.466 Firmware Activation Without Reset: N/A
00:19:08.466 Multiple Update Detection Support: N/A
00:19:08.466 Firmware Update Granularity: No Information Provided
00:19:08.466 Per-Namespace SMART Log: Yes
00:19:08.466 Asymmetric Namespace Access Log Page: Not Supported
00:19:08.466 Subsystem NQN: nqn.2019-08.org.qemu:12340
00:19:08.466 Command Effects Log Page: Supported
00:19:08.466 Get Log Page Extended Data: Supported
00:19:08.466 Telemetry Log Pages: Not Supported
00:19:08.466 Persistent Event Log Pages: Not Supported
00:19:08.466 Supported Log Pages Log Page: May Support
00:19:08.466 Commands Supported & Effects Log Page: Not Supported
00:19:08.466 Feature Identifiers & Effects Log Page:May Support
00:19:08.466 NVMe-MI Commands & Effects Log Page: May Support
00:19:08.466 Data Area 4 for Telemetry Log: Not Supported
00:19:08.466 Error Log Page Entries Supported: 1
00:19:08.466 Keep Alive: Not Supported
00:19:08.466 
00:19:08.466 NVM Command Set Attributes
00:19:08.466 ==========================
00:19:08.466 Submission Queue Entry Size
00:19:08.466 Max: 64
00:19:08.466 Min: 64
00:19:08.466 Completion Queue Entry Size
00:19:08.466 Max: 16
00:19:08.466 Min: 16
00:19:08.466 Number of Namespaces: 256
00:19:08.466 Compare Command: Supported
00:19:08.466 Write Uncorrectable Command: Not Supported
00:19:08.466 Dataset Management Command: Supported
00:19:08.466 Write Zeroes Command: Supported
00:19:08.466 Set Features Save Field: Supported
00:19:08.466 Reservations: Not Supported
00:19:08.466 Timestamp: Supported
00:19:08.466 Copy: Supported
00:19:08.466 Volatile Write Cache: Present
00:19:08.466 Atomic Write Unit (Normal): 1
00:19:08.466 Atomic Write Unit (PFail): 1
00:19:08.466 Atomic Compare & Write Unit: 1
00:19:08.466 Fused Compare & Write: Not Supported
00:19:08.466 Scatter-Gather List
00:19:08.466 SGL Command Set: Supported
00:19:08.466 SGL Keyed: Not Supported
00:19:08.466 SGL Bit Bucket Descriptor: Not Supported
00:19:08.466 SGL Metadata Pointer: Not Supported
00:19:08.466 Oversized SGL: Not Supported
00:19:08.466 SGL Metadata Address: Not Supported
00:19:08.466 SGL Offset: Not Supported
00:19:08.466 Transport SGL Data Block: Not Supported
00:19:08.466 Replay Protected Memory Block: Not Supported
00:19:08.466 
00:19:08.466 Firmware Slot Information
00:19:08.466 =========================
00:19:08.466 Active slot: 1
00:19:08.466 Slot 1 Firmware Revision: 1.0
00:19:08.466 
00:19:08.466 
00:19:08.466 Commands Supported and Effects
00:19:08.466 ==============================
00:19:08.466 Admin Commands
00:19:08.466 --------------
00:19:08.466 Delete I/O Submission Queue (00h): Supported
00:19:08.466 Create I/O Submission Queue (01h): Supported
00:19:08.466 Get Log Page (02h): Supported
00:19:08.466 Delete I/O Completion Queue (04h): Supported
00:19:08.466 Create I/O Completion Queue (05h): Supported
00:19:08.466 Identify (06h): Supported
00:19:08.466 Abort (08h): Supported
00:19:08.466 Set Features (09h): Supported
00:19:08.466 Get Features (0Ah): Supported
00:19:08.466 Asynchronous Event Request (0Ch): Supported
00:19:08.466 Namespace Attachment (15h): Supported NS-Inventory-Change
00:19:08.466 Directive Send (19h): Supported
00:19:08.466 Directive Receive (1Ah): Supported
00:19:08.466 Virtualization Management (1Ch): Supported
00:19:08.466 Doorbell Buffer Config (7Ch): Supported
00:19:08.466 Format NVM (80h): Supported LBA-Change
00:19:08.466 I/O Commands
00:19:08.466 ------------
00:19:08.466 Flush (00h): Supported LBA-Change
00:19:08.466 Write (01h): Supported LBA-Change
00:19:08.466 Read (02h): Supported
00:19:08.466 Compare (05h): Supported
00:19:08.466 Write Zeroes (08h): Supported LBA-Change
00:19:08.466 Dataset Management (09h): Supported LBA-Change
00:19:08.466 Unknown (0Ch): Supported
00:19:08.466 Unknown (12h): Supported
00:19:08.466 Copy (19h): Supported LBA-Change
00:19:08.466 Unknown (1Dh): Supported LBA-Change
00:19:08.466 
00:19:08.466 Error Log
00:19:08.466 =========
00:19:08.466 
00:19:08.466 Arbitration
00:19:08.466 ===========
00:19:08.466 Arbitration Burst: no limit
00:19:08.466 
00:19:08.466 Power Management
00:19:08.466 ================
00:19:08.466 Number of Power States: 1
00:19:08.466 Current Power State: Power State #0
00:19:08.466 Power State #0:
00:19:08.466 Max Power: 25.00 W
00:19:08.466 Non-Operational State: Operational
00:19:08.466 Entry Latency: 16 microseconds
00:19:08.466 Exit Latency: 4 microseconds
00:19:08.466 Relative Read Throughput: 0
00:19:08.466 Relative Read Latency: 0
00:19:08.466 Relative Write Throughput: 0
00:19:08.466 Relative Write Latency: 0
00:19:08.466 Idle Power: Not Reported
00:19:08.466 Active Power: Not Reported
00:19:08.466 Non-Operational Permissive Mode: Not Supported
00:19:08.466 
00:19:08.466 Health Information
00:19:08.467 ==================
00:19:08.467 Critical Warnings:
00:19:08.467 Available Spare Space: OK
00:19:08.467 Temperature: OK
00:19:08.467 Device Reliability: OK
00:19:08.467 Read Only: No
00:19:08.467 Volatile Memory Backup: OK
00:19:08.467 Current Temperature: 323 Kelvin (50 Celsius)
00:19:08.467 Temperature Threshold: 343 Kelvin (70 Celsius)
00:19:08.467 Available Spare: 0%
00:19:08.467 Available Spare Threshold: 0%
00:19:08.467 Life Percentage Used: 0%
00:19:08.467 Data Units Read: 649
00:19:08.467 Data Units Written: 577
00:19:08.467 Host Read Commands: 32570
00:19:08.467 Host Write Commands: 32356
00:19:08.467 Controller Busy Time: 0 minutes
00:19:08.467 Power Cycles: 0
00:19:08.467 Power On Hours: 0 hours
00:19:08.467 Unsafe Shutdowns: 0
00:19:08.467 Unrecoverable Media Errors: 0
00:19:08.467 Lifetime Error Log Entries: 0
00:19:08.467 Warning Temperature Time: 0 minutes
00:19:08.467 Critical Temperature Time: 0 minutes
00:19:08.467 
00:19:08.467 Number of Queues
00:19:08.467 ================
00:19:08.467 Number of I/O Submission Queues: 64
00:19:08.467 Number of I/O Completion Queues: 64
00:19:08.467 
00:19:08.467 ZNS Specific Controller Data
00:19:08.467 ============================
00:19:08.467 Zone Append Size Limit: 0
00:19:08.467 
00:19:08.467 
00:19:08.467 Active Namespaces
00:19:08.467 =================
00:19:08.467 Namespace ID:1
00:19:08.467 Error Recovery Timeout: Unlimited
00:19:08.467 Command Set Identifier: NVM (00h)
00:19:08.467 Deallocate: Supported
00:19:08.467 Deallocated/Unwritten Error: Supported
00:19:08.467 Deallocated Read Value: All 0x00
00:19:08.467 Deallocate in Write Zeroes: Not Supported
00:19:08.467 Deallocated Guard Field: 0xFFFF
00:19:08.467 Flush: Supported
00:19:08.467 Reservation: Not Supported
00:19:08.467 Metadata Transferred as: Separate Metadata Buffer
00:19:08.467 Namespace Sharing Capabilities: Private
00:19:08.467 Size (in LBAs): 1548666 (5GiB)
00:19:08.467 Capacity (in LBAs): 1548666 (5GiB)
00:19:08.467 Utilization (in LBAs): 1548666 (5GiB)
00:19:08.467 Thin Provisioning: Not Supported
00:19:08.467 Per-NS Atomic Units: No
00:19:08.467 Maximum Single Source Range Length: 128
00:19:08.467 Maximum Copy Length: 128
00:19:08.467 Maximum Source Range Count: 128
00:19:08.467 NGUID/EUI64 Never Reused: No
00:19:08.467 Namespace Write Protected: No
00:19:08.467 Number of LBA Formats: 8
00:19:08.467 Current LBA Format: LBA Format #07
00:19:08.467 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:08.467 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:08.467 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:08.467 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:08.467 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:08.467 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:08.467 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:08.467 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:08.467 
00:19:08.467 NVM Specific Namespace Data
00:19:08.467 ===========================
00:19:08.467 Logical Block Storage Tag Mask: 0
00:19:08.467 Protection Information Capabilities:
00:19:08.467 16b Guard Protection Information Storage Tag Support: No
00:19:08.467 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:08.467 Storage Tag Check Read Support: No
00:19:08.467 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.467 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:19:08.467 11:33:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:19:08.725 =====================================================
00:19:08.725 NVMe Controller at 0000:00:11.0 [1b36:0010]
00:19:08.725 =====================================================
00:19:08.725 Controller Capabilities/Features
00:19:08.725 ================================
00:19:08.725 Vendor ID: 1b36
00:19:08.725 Subsystem Vendor ID: 1af4
00:19:08.725 Serial Number: 12341
00:19:08.725 Model Number: QEMU NVMe Ctrl
00:19:08.725 Firmware Version: 8.0.0
00:19:08.725 Recommended Arb Burst: 6
00:19:08.725 IEEE OUI Identifier: 00 54 52
00:19:08.725 Multi-path I/O
00:19:08.725 May have multiple subsystem ports: No
00:19:08.725 May have multiple controllers: No
00:19:08.725 Associated with SR-IOV VF: No
00:19:08.725 Max Data Transfer Size: 524288
00:19:08.725 Max Number of Namespaces: 256
00:19:08.725 Max Number of I/O Queues: 64
00:19:08.725 NVMe Specification Version (VS): 1.4
00:19:08.725 NVMe Specification Version (Identify): 1.4
00:19:08.725 Maximum Queue Entries: 2048
00:19:08.725 Contiguous Queues Required: Yes
00:19:08.725 Arbitration Mechanisms Supported
00:19:08.725 Weighted Round Robin: Not Supported
00:19:08.725 Vendor Specific: Not Supported
00:19:08.725 Reset Timeout: 7500 ms
00:19:08.725 Doorbell Stride: 4 bytes
00:19:08.725 NVM Subsystem Reset: Not Supported
00:19:08.725 Command Sets Supported
00:19:08.725 NVM Command Set: Supported
00:19:08.725 Boot Partition: Not Supported
00:19:08.725 Memory Page Size Minimum: 4096 bytes
00:19:08.725 Memory Page Size Maximum: 65536 bytes
00:19:08.725 Persistent Memory Region: Not Supported
00:19:08.725 Optional Asynchronous Events Supported
00:19:08.725 Namespace Attribute Notices: Supported
00:19:08.725 Firmware Activation Notices: Not Supported
00:19:08.725 ANA Change Notices: Not Supported
00:19:08.725 PLE Aggregate Log Change Notices: Not Supported
00:19:08.725 LBA Status Info Alert Notices: Not Supported
00:19:08.725 EGE Aggregate Log Change Notices: Not Supported
00:19:08.725 Normal NVM Subsystem Shutdown event: Not Supported
00:19:08.725 Zone Descriptor Change Notices: Not Supported
00:19:08.725 Discovery Log Change Notices: Not Supported
00:19:08.725 Controller Attributes
00:19:08.725 128-bit Host Identifier: Not Supported
00:19:08.725 Non-Operational Permissive Mode: Not Supported
00:19:08.725 NVM Sets: Not Supported
00:19:08.725 Read Recovery Levels: Not Supported
00:19:08.725 Endurance Groups: Not Supported
00:19:08.725 Predictable Latency Mode: Not Supported
00:19:08.725 Traffic Based Keep ALive: Not Supported
00:19:08.725 Namespace Granularity: Not Supported
00:19:08.725 SQ Associations: Not Supported
00:19:08.725 UUID List: Not Supported
00:19:08.725 Multi-Domain Subsystem: Not Supported
00:19:08.725 Fixed Capacity Management: Not Supported
00:19:08.725 Variable Capacity Management: Not Supported
00:19:08.725 Delete Endurance Group: Not Supported
00:19:08.725 Delete NVM Set: Not Supported
00:19:08.725 Extended LBA Formats Supported: Supported
00:19:08.725 Flexible Data Placement Supported: Not Supported
00:19:08.725 
00:19:08.725 Controller Memory Buffer Support
00:19:08.725 ================================
00:19:08.725 Supported: No
00:19:08.725 
00:19:08.725 Persistent Memory Region Support
00:19:08.725 ================================
00:19:08.725 Supported: No
00:19:08.725 
00:19:08.725 Admin Command Set Attributes
00:19:08.725 ============================
00:19:08.725 Security Send/Receive: Not Supported
00:19:08.725 Format NVM: Supported
00:19:08.725 Firmware Activate/Download: Not Supported
00:19:08.725 Namespace Management: Supported
00:19:08.725 Device Self-Test: Not Supported
00:19:08.725 Directives: Supported
00:19:08.725 NVMe-MI: Not Supported
00:19:08.725 Virtualization Management: Not Supported
00:19:08.725 Doorbell Buffer Config: Supported
00:19:08.725 Get LBA Status Capability: Not Supported
00:19:08.725 Command & Feature Lockdown Capability: Not Supported
00:19:08.725 Abort Command Limit: 4
00:19:08.726 Async Event Request Limit: 4
00:19:08.726 Number of Firmware Slots: N/A
00:19:08.726 Firmware Slot 1 Read-Only: N/A
00:19:08.726 Firmware Activation Without Reset: N/A
00:19:08.726 Multiple Update Detection Support: N/A
00:19:08.726 Firmware Update Granularity: No Information Provided
00:19:08.726 Per-Namespace SMART Log: Yes
00:19:08.726 Asymmetric Namespace Access Log Page: Not Supported
00:19:08.726 Subsystem NQN: nqn.2019-08.org.qemu:12341
00:19:08.726 Command Effects Log Page: Supported
00:19:08.726 Get Log Page Extended Data: Supported
00:19:08.726 Telemetry Log Pages: Not Supported
00:19:08.726 Persistent Event Log Pages: Not Supported
00:19:08.726 Supported Log Pages Log Page: May Support
00:19:08.726 Commands Supported & Effects Log Page: Not Supported
00:19:08.726 Feature Identifiers & Effects Log Page:May Support
00:19:08.726 NVMe-MI Commands & Effects Log Page: May Support
00:19:08.726 Data Area 4 for Telemetry Log: Not Supported
00:19:08.726 Error Log Page Entries Supported: 1
00:19:08.726 Keep Alive: Not Supported
00:19:08.726 
00:19:08.726 NVM Command Set Attributes
00:19:08.726 ==========================
00:19:08.726 Submission Queue Entry Size
00:19:08.726 Max: 64
00:19:08.726 Min: 64
00:19:08.726 Completion Queue Entry Size
00:19:08.726 Max: 16
00:19:08.726 Min: 16
00:19:08.726 Number of Namespaces: 256
00:19:08.726 Compare Command: Supported
00:19:08.726 Write Uncorrectable Command: Not Supported
00:19:08.726 Dataset Management Command: Supported
00:19:08.726 Write Zeroes Command: Supported
00:19:08.726 Set Features Save Field: Supported
00:19:08.726 Reservations: Not Supported
00:19:08.726 Timestamp: Supported
00:19:08.726 Copy: Supported
00:19:08.726 Volatile Write Cache: Present
00:19:08.726 Atomic Write Unit (Normal): 1
00:19:08.726 Atomic Write Unit (PFail): 1
00:19:08.726 Atomic Compare & Write Unit: 1
00:19:08.726 Fused Compare & Write: Not Supported
00:19:08.726 Scatter-Gather List
00:19:08.726 SGL Command Set: Supported
00:19:08.726 SGL Keyed: Not Supported
00:19:08.726 SGL Bit Bucket Descriptor: Not Supported
00:19:08.726 SGL Metadata Pointer: Not Supported
00:19:08.726 Oversized SGL: Not Supported
00:19:08.726 SGL Metadata Address: Not Supported
00:19:08.726 SGL Offset: Not Supported
00:19:08.726 Transport SGL Data Block: Not Supported
00:19:08.726 Replay Protected Memory Block: Not Supported
00:19:08.726 
00:19:08.726 Firmware Slot Information
00:19:08.726 =========================
00:19:08.726 Active slot: 1
00:19:08.726 Slot 1 Firmware Revision: 1.0
00:19:08.726 
00:19:08.726 
00:19:08.726 Commands Supported and Effects
00:19:08.726 ==============================
00:19:08.726 Admin Commands
00:19:08.726 --------------
00:19:08.726 Delete I/O Submission Queue (00h): Supported
00:19:08.726 Create I/O Submission Queue (01h): Supported
00:19:08.726 Get Log Page (02h): Supported
00:19:08.726 Delete I/O Completion Queue (04h): Supported
00:19:08.726 Create I/O Completion Queue (05h): Supported
00:19:08.726 Identify (06h): Supported
00:19:08.726 Abort (08h): Supported
00:19:08.726 Set Features (09h): Supported
00:19:08.726 Get Features (0Ah): Supported
00:19:08.726 Asynchronous Event Request (0Ch): Supported
00:19:08.726 Namespace Attachment (15h): Supported NS-Inventory-Change
00:19:08.726 Directive Send (19h): Supported
00:19:08.726 Directive Receive (1Ah): Supported
00:19:08.726 Virtualization Management (1Ch): Supported
00:19:08.726 Doorbell Buffer Config (7Ch): Supported
00:19:08.726 Format NVM (80h): Supported LBA-Change
00:19:08.726 I/O Commands
00:19:08.726 ------------
00:19:08.726 Flush (00h): Supported LBA-Change
00:19:08.726 Write (01h): Supported LBA-Change
00:19:08.726 Read (02h): Supported
00:19:08.726 Compare (05h): Supported
00:19:08.726 Write Zeroes (08h): Supported LBA-Change
00:19:08.726 Dataset Management (09h): Supported LBA-Change
00:19:08.726 Unknown (0Ch): Supported
00:19:08.726 Unknown (12h): Supported
00:19:08.726 Copy (19h): Supported LBA-Change
00:19:08.726 Unknown (1Dh): Supported LBA-Change
00:19:08.726 
00:19:08.726 Error Log
00:19:08.726 =========
00:19:08.726 
00:19:08.726 Arbitration
00:19:08.726 ===========
00:19:08.726 Arbitration Burst: no limit
00:19:08.726 
00:19:08.726 Power Management
00:19:08.726 ================
00:19:08.726 Number of Power States: 1
00:19:08.726 Current Power State: Power State #0
00:19:08.726 Power State #0:
00:19:08.726 Max Power: 25.00 W
00:19:08.726 Non-Operational State: Operational
00:19:08.726 Entry Latency: 16 microseconds
00:19:08.726 Exit Latency: 4 microseconds
00:19:08.726 Relative Read Throughput: 0
00:19:08.726 Relative Read Latency: 0
00:19:08.726 Relative Write Throughput: 0
00:19:08.726 Relative Write Latency: 0
00:19:08.726 Idle Power: Not Reported
00:19:08.726 Active Power: Not Reported
00:19:08.726 Non-Operational Permissive Mode: Not Supported
00:19:08.726 
00:19:08.726 Health Information
00:19:08.726 ==================
00:19:08.726 Critical Warnings:
00:19:08.726 Available Spare Space: OK
00:19:08.726 Temperature: OK
00:19:08.726 Device Reliability: OK
00:19:08.726 Read Only: No
00:19:08.726 Volatile Memory Backup: OK
00:19:08.726 Current Temperature: 323 Kelvin (50 Celsius)
00:19:08.726 Temperature Threshold: 343 Kelvin (70 Celsius)
00:19:08.726 Available Spare: 0%
00:19:08.726 Available Spare Threshold: 0%
00:19:08.726 Life Percentage Used: 0%
00:19:08.726 Data Units Read: 1008
00:19:08.726 Data Units Written: 875
00:19:08.726 Host Read Commands: 48257
00:19:08.726 Host Write Commands: 47045
00:19:08.726 Controller Busy Time: 0 minutes
00:19:08.726 Power Cycles: 0
00:19:08.726 Power On Hours: 0 hours
00:19:08.726 Unsafe Shutdowns: 0
00:19:08.726 Unrecoverable Media Errors: 0
00:19:08.726 Lifetime Error Log Entries: 0
00:19:08.726 Warning Temperature Time: 0 minutes
00:19:08.726 Critical Temperature Time: 0 minutes
00:19:08.726 
00:19:08.726 Number of Queues
00:19:08.726 ================
00:19:08.726 Number of I/O Submission Queues: 64
00:19:08.726 Number of I/O Completion Queues: 64
00:19:08.726 
00:19:08.726 ZNS Specific Controller Data
00:19:08.726 ============================
00:19:08.726 Zone Append Size Limit: 0
00:19:08.726 
00:19:08.726 
00:19:08.726 Active Namespaces
00:19:08.726 =================
00:19:08.726 Namespace ID:1
00:19:08.726 Error Recovery Timeout: Unlimited
00:19:08.726 Command Set Identifier: NVM (00h)
00:19:08.726 Deallocate: Supported
00:19:08.726 Deallocated/Unwritten Error: Supported
00:19:08.726 Deallocated Read Value: All 0x00
00:19:08.726 Deallocate in Write Zeroes: Not Supported
00:19:08.726 Deallocated Guard Field: 0xFFFF
00:19:08.727 Flush: Supported
00:19:08.727 Reservation: Not Supported
00:19:08.727 Namespace Sharing Capabilities: Private
00:19:08.727 Size (in LBAs): 1310720 (5GiB)
00:19:08.727 Capacity (in LBAs): 1310720 (5GiB)
00:19:08.727 Utilization (in LBAs): 1310720 (5GiB)
00:19:08.727 Thin Provisioning: Not Supported
00:19:08.727 Per-NS Atomic Units: No
00:19:08.727 Maximum Single Source Range Length: 128
00:19:08.727 Maximum Copy Length: 128
00:19:08.727 Maximum Source Range Count: 128
00:19:08.727 NGUID/EUI64 Never Reused: No
00:19:08.727 Namespace Write Protected: No
00:19:08.727 Number of LBA Formats: 8
00:19:08.727 Current LBA Format: LBA Format #04
00:19:08.727 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:08.727 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:08.727 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:08.727 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:08.727 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:08.727 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:08.727 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:08.727 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:08.727 
00:19:08.727 NVM Specific Namespace Data
00:19:08.727 ===========================
00:19:08.727 Logical Block Storage Tag Mask: 0
00:19:08.727 Protection Information Capabilities:
00:19:08.727 16b Guard Protection Information Storage Tag Support: No
00:19:08.727 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:08.727 Storage Tag Check Read Support: No
00:19:08.727 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.727 11:33:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:19:08.727 11:33:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0
00:19:08.986 =====================================================
00:19:08.986 NVMe Controller at 0000:00:12.0 [1b36:0010]
00:19:08.986 =====================================================
00:19:08.986 Controller Capabilities/Features
00:19:08.986 ================================
00:19:08.986 Vendor ID: 1b36
00:19:08.986 Subsystem Vendor ID: 1af4
00:19:08.986 Serial Number: 12342
00:19:08.986 Model Number: QEMU NVMe Ctrl
00:19:08.986 Firmware Version: 8.0.0
00:19:08.986 Recommended Arb Burst: 6
00:19:08.986 IEEE OUI Identifier: 00 54 52
00:19:08.986 Multi-path I/O
00:19:08.986 May have multiple subsystem ports: No
00:19:08.986 May have multiple controllers: No
00:19:08.986 Associated with SR-IOV VF: No
00:19:08.986 Max Data Transfer Size: 524288
00:19:08.986 Max Number of Namespaces: 256
00:19:08.986 Max Number of I/O Queues: 64
00:19:08.986 NVMe Specification Version (VS): 1.4
00:19:08.986 NVMe Specification Version (Identify): 1.4
00:19:08.986 Maximum Queue Entries: 2048
00:19:08.986 Contiguous Queues Required: Yes
00:19:08.986 Arbitration Mechanisms Supported
00:19:08.986 Weighted Round Robin: Not Supported
00:19:08.986 Vendor Specific: Not Supported
00:19:08.986 Reset Timeout: 7500 ms
00:19:08.986 Doorbell Stride: 4 bytes
00:19:08.986 NVM Subsystem Reset: Not Supported
00:19:08.986 Command Sets Supported
00:19:08.986 NVM Command Set: Supported
00:19:08.986 Boot Partition: Not Supported
00:19:08.986 Memory Page Size Minimum: 4096 bytes
00:19:08.986 Memory Page Size Maximum: 65536 bytes
00:19:08.987 Persistent Memory Region: Not Supported
00:19:08.987 Optional Asynchronous Events Supported
00:19:08.987 Namespace Attribute Notices: Supported
00:19:08.987 Firmware Activation Notices: Not Supported
00:19:08.987 ANA Change Notices: Not Supported
00:19:08.987 PLE Aggregate Log Change Notices: Not Supported
00:19:08.987 LBA Status Info Alert Notices: Not Supported
00:19:08.987 EGE Aggregate Log Change Notices: Not Supported
00:19:08.987 Normal NVM Subsystem Shutdown event: Not Supported
00:19:08.987 Zone Descriptor Change Notices: Not Supported
00:19:08.987 Discovery Log Change Notices: Not Supported
00:19:08.987 Controller Attributes
00:19:08.987 128-bit Host Identifier: Not Supported
00:19:08.987 Non-Operational Permissive Mode: Not Supported
00:19:08.987 NVM Sets: Not Supported
00:19:08.987 Read Recovery Levels: Not Supported
00:19:08.987 Endurance Groups: Not Supported
00:19:08.987 Predictable Latency Mode: Not Supported
00:19:08.987 Traffic Based Keep ALive: Not Supported
00:19:08.987 Namespace Granularity: Not Supported
00:19:08.987 SQ Associations: Not Supported
00:19:08.987 UUID List: Not Supported
00:19:08.987 Multi-Domain Subsystem: Not Supported
00:19:08.987 Fixed Capacity Management: Not Supported
00:19:08.987 Variable Capacity Management: Not Supported
00:19:08.987 Delete Endurance Group: Not Supported
00:19:08.987 Delete NVM Set: Not Supported
00:19:08.987 Extended LBA Formats Supported: Supported
00:19:08.987 Flexible Data Placement Supported: Not Supported
00:19:08.987 
00:19:08.987 Controller Memory Buffer Support
00:19:08.987 ================================
00:19:08.987 Supported: No
00:19:08.987 
00:19:08.987 Persistent Memory Region Support
00:19:08.987 ================================
00:19:08.987 Supported: No
00:19:08.987 
00:19:08.987 Admin Command Set Attributes
00:19:08.987 ============================
00:19:08.987 Security Send/Receive: Not Supported
00:19:08.987 Format NVM: Supported
00:19:08.987 Firmware Activate/Download: Not Supported
00:19:08.987 Namespace Management: Supported
00:19:08.987 Device Self-Test: Not Supported
00:19:08.987 Directives: Supported
00:19:08.987 NVMe-MI: Not Supported
00:19:08.987 Virtualization Management: Not Supported
00:19:08.987 Doorbell Buffer Config: Supported
00:19:08.987 Get LBA Status Capability: Not Supported
00:19:08.987 Command & Feature Lockdown Capability: Not Supported
00:19:08.987 Abort Command Limit: 4
00:19:08.987 Async Event Request Limit: 4
00:19:08.987 Number of Firmware Slots: N/A
00:19:08.987 Firmware Slot 1 Read-Only: N/A
00:19:08.987 Firmware Activation Without Reset: N/A
00:19:08.987 Multiple Update Detection Support: N/A
00:19:08.987 Firmware Update Granularity: No Information Provided
00:19:08.987 Per-Namespace SMART Log: Yes
00:19:08.987 Asymmetric Namespace Access Log Page: Not Supported
00:19:08.987 Subsystem NQN: nqn.2019-08.org.qemu:12342
00:19:08.987 Command Effects Log Page: Supported
00:19:08.987 Get Log Page Extended Data: Supported
00:19:08.987 Telemetry Log Pages: Not Supported
00:19:08.987 Persistent Event Log Pages: Not Supported
00:19:08.987 Supported Log Pages Log Page: May Support
00:19:08.987 Commands Supported & Effects Log Page: Not Supported
00:19:08.987 Feature Identifiers & Effects Log Page:May Support
00:19:08.987 NVMe-MI Commands & Effects Log Page: May Support
00:19:08.987 Data Area 4 for Telemetry Log: Not Supported
00:19:08.987 Error Log Page Entries Supported: 1
00:19:08.987 Keep Alive: Not Supported
00:19:08.987 
00:19:08.987 NVM Command Set Attributes
00:19:08.987 ==========================
00:19:08.987 Submission Queue Entry Size
00:19:08.987 Max: 64
00:19:08.987 Min: 64
00:19:08.987 Completion Queue Entry Size
00:19:08.987 Max: 16
00:19:08.987 Min: 16
00:19:08.987 Number of Namespaces: 256
00:19:08.987 Compare Command: Supported
00:19:08.987 Write Uncorrectable Command: Not Supported
00:19:08.987 Dataset Management Command: Supported
00:19:08.987 Write Zeroes Command: Supported
00:19:08.987 Set Features Save Field: Supported
00:19:08.987 Reservations: Not Supported
00:19:08.987 Timestamp: Supported
00:19:08.987 Copy: Supported
00:19:08.987 Volatile Write Cache: Present
00:19:08.987 Atomic Write Unit (Normal): 1
00:19:08.987 Atomic Write Unit (PFail): 1
00:19:08.987 Atomic Compare & Write Unit: 1
00:19:08.987 Fused Compare & Write: Not Supported
00:19:08.987 Scatter-Gather List
00:19:08.987 SGL Command Set: Supported
00:19:08.987 SGL Keyed: Not Supported
00:19:08.987 SGL Bit Bucket Descriptor: Not Supported
00:19:08.987 SGL Metadata Pointer: Not Supported
00:19:08.987 Oversized SGL: Not Supported
00:19:08.987 SGL Metadata Address: Not Supported
00:19:08.987 SGL Offset: Not Supported
00:19:08.987 Transport SGL Data Block: Not Supported
00:19:08.987 Replay Protected Memory Block: Not Supported
00:19:08.987 
00:19:08.987 Firmware Slot Information
00:19:08.987 =========================
00:19:08.987 Active slot: 1
00:19:08.987 Slot 1 Firmware Revision: 1.0
00:19:08.987 
00:19:08.987 
00:19:08.987 Commands Supported and Effects
00:19:08.987 ==============================
00:19:08.987 Admin Commands
00:19:08.987 --------------
00:19:08.987 Delete I/O Submission Queue (00h): Supported
00:19:08.987 Create I/O Submission Queue (01h): Supported
00:19:08.987 Get Log Page (02h): Supported
00:19:08.987 Delete I/O Completion Queue (04h): Supported
00:19:08.987 Create I/O Completion Queue (05h): Supported
00:19:08.987 Identify (06h): Supported
00:19:08.987 Abort (08h): Supported
00:19:08.987 Set Features (09h): Supported
00:19:08.987 Get Features (0Ah): Supported
00:19:08.987 Asynchronous Event Request (0Ch): Supported
00:19:08.987 Namespace Attachment (15h): Supported NS-Inventory-Change
00:19:08.987 Directive Send (19h): Supported
00:19:08.987 Directive Receive (1Ah): Supported
00:19:08.987 Virtualization Management (1Ch): Supported
00:19:08.987 Doorbell Buffer Config (7Ch): Supported
00:19:08.987 Format NVM (80h): Supported LBA-Change
00:19:08.987 I/O Commands
00:19:08.987 ------------
00:19:08.987 Flush (00h): Supported LBA-Change
00:19:08.987 Write (01h): Supported LBA-Change
00:19:08.987 Read (02h): Supported
00:19:08.987 Compare (05h): Supported
00:19:08.987 Write Zeroes (08h): Supported LBA-Change
00:19:08.987 Dataset Management (09h): Supported LBA-Change
00:19:08.987 Unknown (0Ch): Supported
00:19:08.987 Unknown (12h): Supported
00:19:08.987 Copy (19h): Supported LBA-Change
00:19:08.987 Unknown (1Dh): Supported LBA-Change
00:19:08.987 
00:19:08.987 Error Log
00:19:08.987 =========
00:19:08.987 
00:19:08.987 Arbitration
00:19:08.987 ===========
00:19:08.987 Arbitration Burst: no limit
00:19:08.987 
00:19:08.987 Power Management
00:19:08.987 ================
00:19:08.987 Number of Power States: 1
00:19:08.987 Current Power State: Power State #0
00:19:08.987 Power State #0:
00:19:08.987 Max Power: 25.00 W
00:19:08.987 Non-Operational State: Operational
00:19:08.987 Entry Latency: 16 microseconds
00:19:08.987 Exit Latency: 4 microseconds
00:19:08.987 Relative Read Throughput: 0
00:19:08.987 Relative Read Latency: 0
00:19:08.987 Relative Write Throughput: 0
00:19:08.987 Relative Write Latency: 0
00:19:08.987 Idle Power: Not Reported
00:19:08.987 Active Power: Not Reported
00:19:08.987 Non-Operational Permissive Mode: Not Supported
00:19:08.987 
00:19:08.987 Health Information
00:19:08.987 ==================
00:19:08.987 Critical Warnings:
00:19:08.987 Available Spare Space: OK
00:19:08.987 Temperature: OK
00:19:08.987 Device Reliability: OK
00:19:08.987 Read Only: No
00:19:08.987 Volatile Memory Backup: OK
00:19:08.987 Current Temperature: 323 Kelvin (50 Celsius)
00:19:08.987 Temperature Threshold: 343 Kelvin (70 Celsius)
00:19:08.987 Available Spare: 0%
00:19:08.987 Available Spare Threshold: 0%
00:19:08.987 Life Percentage Used: 0%
00:19:08.987 Data Units Read: 2076
00:19:08.987 Data Units Written: 1863
00:19:08.988 Host Read Commands: 99678
00:19:08.988 Host Write Commands: 97947
00:19:08.988 Controller Busy Time: 0 minutes
00:19:08.988 Power Cycles: 0
00:19:08.988 Power On Hours: 0 hours
00:19:08.988 Unsafe Shutdowns: 0
00:19:08.988 Unrecoverable Media Errors: 0
00:19:08.988 Lifetime Error Log Entries: 0
00:19:08.988 Warning Temperature Time: 0 minutes
00:19:08.988 Critical Temperature Time: 0 minutes
00:19:08.988 
00:19:08.988 Number of Queues
00:19:08.988 ================
00:19:08.988 Number of I/O Submission Queues: 64
00:19:08.988 Number of I/O Completion Queues: 64
00:19:08.988 
00:19:08.988 ZNS Specific Controller Data
00:19:08.988 ============================
00:19:08.988 Zone Append Size Limit: 0
00:19:08.988 
00:19:08.988 
00:19:08.988 Active Namespaces
00:19:08.988 =================
00:19:08.988 Namespace ID:1
00:19:08.988 Error Recovery Timeout: Unlimited
00:19:08.988 Command Set Identifier: NVM (00h)
00:19:08.988 Deallocate: Supported
00:19:08.988 Deallocated/Unwritten Error: Supported
00:19:08.988 Deallocated Read Value: All 0x00
00:19:08.988 Deallocate in Write Zeroes: Not Supported
00:19:08.988 Deallocated Guard Field: 0xFFFF
00:19:08.988 Flush: Supported
00:19:08.988 Reservation: Not Supported
00:19:08.988 Namespace Sharing Capabilities: Private
00:19:08.988 Size (in LBAs): 1048576 (4GiB)
00:19:08.988 Capacity (in LBAs): 1048576 (4GiB)
00:19:08.988 Utilization (in LBAs): 1048576 (4GiB)
00:19:08.988 Thin Provisioning: Not Supported
00:19:08.988 Per-NS Atomic Units: No
00:19:08.988 Maximum Single Source Range Length: 128
00:19:08.988 Maximum Copy Length: 128
00:19:08.988 Maximum Source Range Count: 128
00:19:08.988 NGUID/EUI64 Never Reused: No
00:19:08.988 Namespace Write Protected: No
00:19:08.988 Number of LBA Formats: 8
00:19:08.988 Current LBA Format: LBA Format #04
00:19:08.988 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:08.988 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:08.988 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:08.988 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:08.988 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:08.988 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:08.988 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:08.988 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:08.988 
00:19:08.988 NVM Specific Namespace Data
00:19:08.988 ===========================
00:19:08.988 Logical Block Storage Tag Mask: 0
00:19:08.988 Protection Information Capabilities:
00:19:08.988 16b Guard Protection Information Storage Tag Support: No
00:19:08.988 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:08.988 Storage Tag Check Read Support: No
00:19:08.988 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Namespace ID:2
00:19:08.988 Error Recovery Timeout: Unlimited
00:19:08.988 Command Set Identifier: NVM (00h)
00:19:08.988 Deallocate: Supported
00:19:08.988 Deallocated/Unwritten Error: Supported
00:19:08.988 Deallocated Read Value: All 0x00
00:19:08.988 Deallocate in Write Zeroes: Not Supported
00:19:08.988 Deallocated Guard Field: 0xFFFF
00:19:08.988 Flush: Supported
00:19:08.988 Reservation: Not Supported
00:19:08.988 Namespace Sharing Capabilities: Private
00:19:08.988 Size (in LBAs): 1048576 (4GiB)
00:19:08.988 Capacity (in LBAs): 1048576 (4GiB)
00:19:08.988 Utilization (in LBAs): 1048576 (4GiB)
00:19:08.988 Thin Provisioning: Not Supported
00:19:08.988 Per-NS Atomic Units: No
00:19:08.988 Maximum Single Source Range Length: 128
00:19:08.988 Maximum Copy Length: 128
00:19:08.988 Maximum Source Range Count: 128
00:19:08.988 NGUID/EUI64 Never Reused: No
00:19:08.988 Namespace Write Protected: No
00:19:08.988 Number of LBA Formats: 8
00:19:08.988 Current LBA Format: LBA Format #04
00:19:08.988 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:08.988 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:08.988 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:08.988 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:08.988 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:08.988 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:08.988 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:08.988 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:08.988 
00:19:08.988 NVM Specific Namespace Data
00:19:08.988 ===========================
00:19:08.988 Logical Block Storage Tag Mask: 0
00:19:08.988 Protection Information Capabilities:
00:19:08.988 16b Guard Protection Information Storage Tag Support: No
00:19:08.988 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:08.988 Storage Tag Check Read Support: No
00:19:08.988 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.988 Namespace ID:3
00:19:08.988 Error Recovery Timeout: Unlimited
00:19:08.988 Command Set Identifier: NVM (00h)
00:19:08.988 Deallocate: Supported
00:19:08.988 Deallocated/Unwritten Error: Supported
00:19:08.988 Deallocated Read Value: All 0x00
00:19:08.988 Deallocate in Write Zeroes: Not Supported
00:19:08.988 Deallocated Guard Field: 0xFFFF
00:19:08.988 Flush: Supported
00:19:08.988 Reservation: Not Supported
00:19:08.988 Namespace Sharing Capabilities: Private
00:19:08.988 Size (in LBAs): 1048576 (4GiB)
00:19:08.988 Capacity (in LBAs): 1048576 (4GiB)
00:19:08.988 Utilization (in LBAs): 1048576 (4GiB)
00:19:08.988 Thin Provisioning: Not Supported
00:19:08.988 Per-NS Atomic Units: No
00:19:08.988 Maximum Single Source Range Length: 128
00:19:08.988 Maximum Copy Length: 128
00:19:08.988 Maximum Source Range Count: 128
00:19:08.988 NGUID/EUI64 Never Reused: No
00:19:08.988 Namespace Write Protected: No
00:19:08.988 Number of LBA Formats: 8
00:19:08.988 Current LBA Format: LBA Format #04
00:19:08.988 LBA Format #00: Data Size: 512 Metadata Size: 0
00:19:08.988 LBA Format #01: Data Size: 512 Metadata Size: 8
00:19:08.988 LBA Format #02: Data Size: 512 Metadata Size: 16
00:19:08.988 LBA Format #03: Data Size: 512 Metadata Size: 64
00:19:08.988 LBA Format #04: Data Size: 4096 Metadata Size: 0
00:19:08.988 LBA Format #05: Data Size: 4096 Metadata Size: 8
00:19:08.988 LBA Format #06: Data Size: 4096 Metadata Size: 16
00:19:08.988 LBA Format #07: Data Size: 4096 Metadata Size: 64
00:19:08.988 
00:19:08.988 NVM Specific Namespace Data
00:19:08.988 ===========================
00:19:08.988 Logical Block Storage Tag Mask: 0
00:19:08.989 Protection Information Capabilities:
00:19:08.989 16b Guard Protection Information Storage Tag Support: No
00:19:08.989 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
00:19:08.989 Storage Tag Check Read Support: No
00:19:08.989 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:08.989 11:33:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:19:08.989 11:33:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
00:19:09.556 =====================================================
00:19:09.556 NVMe Controller at 0000:00:13.0 [1b36:0010]
00:19:09.556 =====================================================
00:19:09.556 Controller Capabilities/Features
00:19:09.556 ================================
00:19:09.556 Vendor ID: 1b36
00:19:09.556 Subsystem Vendor ID: 1af4
00:19:09.556 Serial Number: 12343
00:19:09.556 Model Number: QEMU NVMe Ctrl
00:19:09.556 Firmware Version: 8.0.0
8.0.0 00:19:09.556 Recommended Arb Burst: 6 00:19:09.556 IEEE OUI Identifier: 00 54 52 00:19:09.556 Multi-path I/O 00:19:09.556 May have multiple subsystem ports: No 00:19:09.556 May have multiple controllers: Yes 00:19:09.556 Associated with SR-IOV VF: No 00:19:09.556 Max Data Transfer Size: 524288 00:19:09.556 Max Number of Namespaces: 256 00:19:09.556 Max Number of I/O Queues: 64 00:19:09.556 NVMe Specification Version (VS): 1.4 00:19:09.557 NVMe Specification Version (Identify): 1.4 00:19:09.557 Maximum Queue Entries: 2048 00:19:09.557 Contiguous Queues Required: Yes 00:19:09.557 Arbitration Mechanisms Supported 00:19:09.557 Weighted Round Robin: Not Supported 00:19:09.557 Vendor Specific: Not Supported 00:19:09.557 Reset Timeout: 7500 ms 00:19:09.557 Doorbell Stride: 4 bytes 00:19:09.557 NVM Subsystem Reset: Not Supported 00:19:09.557 Command Sets Supported 00:19:09.557 NVM Command Set: Supported 00:19:09.557 Boot Partition: Not Supported 00:19:09.557 Memory Page Size Minimum: 4096 bytes 00:19:09.557 Memory Page Size Maximum: 65536 bytes 00:19:09.557 Persistent Memory Region: Not Supported 00:19:09.557 Optional Asynchronous Events Supported 00:19:09.557 Namespace Attribute Notices: Supported 00:19:09.557 Firmware Activation Notices: Not Supported 00:19:09.557 ANA Change Notices: Not Supported 00:19:09.557 PLE Aggregate Log Change Notices: Not Supported 00:19:09.557 LBA Status Info Alert Notices: Not Supported 00:19:09.557 EGE Aggregate Log Change Notices: Not Supported 00:19:09.557 Normal NVM Subsystem Shutdown event: Not Supported 00:19:09.557 Zone Descriptor Change Notices: Not Supported 00:19:09.557 Discovery Log Change Notices: Not Supported 00:19:09.557 Controller Attributes 00:19:09.557 128-bit Host Identifier: Not Supported 00:19:09.557 Non-Operational Permissive Mode: Not Supported 00:19:09.557 NVM Sets: Not Supported 00:19:09.557 Read Recovery Levels: Not Supported 00:19:09.557 Endurance Groups: Supported 00:19:09.557 Predictable Latency Mode: Not Supported 00:19:09.557 Traffic Based Keep Alive: Not Supported 00:19:09.557 Namespace Granularity: Not Supported 00:19:09.557 SQ Associations: Not Supported 00:19:09.557 UUID List: Not Supported 00:19:09.557 Multi-Domain Subsystem: Not Supported 00:19:09.557 Fixed Capacity Management: Not Supported 00:19:09.557 Variable Capacity Management: Not Supported 00:19:09.557 Delete Endurance Group: Not Supported 00:19:09.557 Delete NVM Set: Not Supported 00:19:09.557 Extended LBA Formats Supported: Supported 00:19:09.557 Flexible Data Placement Supported: Supported 00:19:09.557 00:19:09.557 Controller Memory Buffer Support 00:19:09.557 ================================ 00:19:09.557 Supported: No 00:19:09.557 00:19:09.557 Persistent Memory Region Support 00:19:09.557 ================================ 00:19:09.557 Supported: No 00:19:09.557 00:19:09.557 Admin Command Set Attributes 00:19:09.557 ============================ 00:19:09.557 Security Send/Receive: Not Supported 00:19:09.557 Format NVM: Supported 00:19:09.557 Firmware Activate/Download: Not Supported 00:19:09.557 Namespace Management: Supported 00:19:09.557 Device Self-Test: Not Supported 00:19:09.557 Directives: Supported 00:19:09.557 NVMe-MI: Not Supported 00:19:09.557 Virtualization Management: Not Supported 00:19:09.557 Doorbell Buffer Config: Supported 00:19:09.557 Get LBA Status Capability: Not Supported 00:19:09.557 Command & Feature Lockdown Capability: Not Supported 00:19:09.557 Abort Command Limit: 4 00:19:09.557 Async Event Request Limit: 4 00:19:09.557 Number of Firmware
Slots: N/A 00:19:09.557 Firmware Slot 1 Read-Only: N/A 00:19:09.557 Firmware Activation Without Reset: N/A 00:19:09.557 Multiple Update Detection Support: N/A 00:19:09.557 Firmware Update Granularity: No Information Provided 00:19:09.557 Per-Namespace SMART Log: Yes 00:19:09.557 Asymmetric Namespace Access Log Page: Not Supported 00:19:09.557 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:09.557 Command Effects Log Page: Supported 00:19:09.557 Get Log Page Extended Data: Supported 00:19:09.557 Telemetry Log Pages: Not Supported 00:19:09.557 Persistent Event Log Pages: Not Supported 00:19:09.557 Supported Log Pages Log Page: May Support 00:19:09.557 Commands Supported & Effects Log Page: Not Supported 00:19:09.557 Feature Identifiers & Effects Log Page: May Support 00:19:09.557 NVMe-MI Commands & Effects Log Page: May Support 00:19:09.557 Data Area 4 for Telemetry Log: Not Supported 00:19:09.557 Error Log Page Entries Supported: 1 00:19:09.557 Keep Alive: Not Supported 00:19:09.557 00:19:09.557 NVM Command Set Attributes 00:19:09.557 ========================== 00:19:09.557 Submission Queue Entry Size 00:19:09.557 Max: 64 00:19:09.557 Min: 64 00:19:09.557 Completion Queue Entry Size 00:19:09.557 Max: 16 00:19:09.557 Min: 16 00:19:09.557 Number of Namespaces: 256 00:19:09.557 Compare Command: Supported 00:19:09.557 Write Uncorrectable Command: Not Supported 00:19:09.557 Dataset Management Command: Supported 00:19:09.557 Write Zeroes Command: Supported 00:19:09.557 Set Features Save Field: Supported 00:19:09.557 Reservations: Not Supported 00:19:09.557 Timestamp: Supported 00:19:09.557 Copy: Supported 00:19:09.557 Volatile Write Cache: Present 00:19:09.557 Atomic Write Unit (Normal): 1 00:19:09.557 Atomic Write Unit (PFail): 1 00:19:09.557 Atomic Compare & Write Unit: 1 00:19:09.557 Fused Compare & Write: Not Supported 00:19:09.557 Scatter-Gather List 00:19:09.557 SGL Command Set: Supported 00:19:09.557 SGL Keyed: Not Supported 00:19:09.557 SGL Bit Bucket Descriptor: Not Supported 00:19:09.557 SGL Metadata Pointer: Not Supported 00:19:09.557 Oversized SGL: Not Supported 00:19:09.557 SGL Metadata Address: Not Supported 00:19:09.557 SGL Offset: Not Supported 00:19:09.557 Transport SGL Data Block: Not Supported 00:19:09.557 Replay Protected Memory Block: Not Supported 00:19:09.557 00:19:09.557 Firmware Slot Information 00:19:09.557 ========================= 00:19:09.557 Active slot: 1 00:19:09.557 Slot 1 Firmware Revision: 1.0 00:19:09.557 00:19:09.557 00:19:09.557 Commands Supported and Effects 00:19:09.557 ============================== 00:19:09.557 Admin Commands 00:19:09.557 -------------- 00:19:09.557 Delete I/O Submission Queue (00h): Supported 00:19:09.557 Create I/O Submission Queue (01h): Supported 00:19:09.557 Get Log Page (02h): Supported 00:19:09.557 Delete I/O Completion Queue (04h): Supported 00:19:09.557 Create I/O Completion Queue (05h): Supported 00:19:09.557 Identify (06h): Supported 00:19:09.557 Abort (08h): Supported 00:19:09.557 Set Features (09h): Supported 00:19:09.557 Get Features (0Ah): Supported 00:19:09.557 Asynchronous Event Request (0Ch): Supported 00:19:09.557 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:09.557 Directive Send (19h): Supported 00:19:09.557 Directive Receive (1Ah): Supported 00:19:09.557 Virtualization Management (1Ch): Supported 00:19:09.557 Doorbell Buffer Config (7Ch): Supported 00:19:09.557 Format NVM (80h): Supported LBA-Change 00:19:09.557 I/O Commands 00:19:09.557 ------------ 00:19:09.557 Flush (00h): Supported
LBA-Change 00:19:09.557 Write (01h): Supported LBA-Change 00:19:09.557 Read (02h): Supported 00:19:09.557 Compare (05h): Supported 00:19:09.557 Write Zeroes (08h): Supported LBA-Change 00:19:09.557 Dataset Management (09h): Supported LBA-Change 00:19:09.557 Unknown (0Ch): Supported 00:19:09.557 Unknown (12h): Supported 00:19:09.557 Copy (19h): Supported LBA-Change 00:19:09.557 Unknown (1Dh): Supported LBA-Change 00:19:09.557 00:19:09.557 Error Log 00:19:09.557 ========= 00:19:09.557 00:19:09.557 Arbitration 00:19:09.557 =========== 00:19:09.557 Arbitration Burst: no limit 00:19:09.557 00:19:09.557 Power Management 00:19:09.557 ================ 00:19:09.557 Number of Power States: 1 00:19:09.557 Current Power State: Power State #0 00:19:09.557 Power State #0: 00:19:09.557 Max Power: 25.00 W 00:19:09.557 Non-Operational State: Operational 00:19:09.557 Entry Latency: 16 microseconds 00:19:09.557 Exit Latency: 4 microseconds 00:19:09.557 Relative Read Throughput: 0 00:19:09.557 Relative Read Latency: 0 00:19:09.557 Relative Write Throughput: 0 00:19:09.557 Relative Write Latency: 0 00:19:09.557 Idle Power: Not Reported 00:19:09.557 Active Power: Not Reported 00:19:09.557 Non-Operational Permissive Mode: Not Supported 00:19:09.557 00:19:09.557 Health Information 00:19:09.557 ================== 00:19:09.557 Critical Warnings: 00:19:09.557 Available Spare Space: OK 00:19:09.557 Temperature: OK 00:19:09.557 Device Reliability: OK 00:19:09.557 Read Only: No 00:19:09.557 Volatile Memory Backup: OK 00:19:09.557 Current Temperature: 323 Kelvin (50 Celsius) 00:19:09.557 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:09.557 Available Spare: 0% 00:19:09.557 Available Spare Threshold: 0% 00:19:09.557 Life Percentage Used: 0% 00:19:09.557 Data Units Read: 775 00:19:09.557 Data Units Written: 704 00:19:09.557 Host Read Commands: 34031 00:19:09.557 Host Write Commands: 33454 00:19:09.557 Controller Busy Time: 0 minutes 00:19:09.557 Power Cycles: 0 00:19:09.557 Power On Hours: 0 hours 00:19:09.557 Unsafe Shutdowns: 0 00:19:09.557 Unrecoverable Media Errors: 0 00:19:09.557 Lifetime Error Log Entries: 0 00:19:09.557 Warning Temperature Time: 0 minutes 00:19:09.557 Critical Temperature Time: 0 minutes 00:19:09.557 00:19:09.557 Number of Queues 00:19:09.557 ================ 00:19:09.557 Number of I/O Submission Queues: 64 00:19:09.557 Number of I/O Completion Queues: 64 00:19:09.557 00:19:09.557 ZNS Specific Controller Data 00:19:09.557 ============================ 00:19:09.557 Zone Append Size Limit: 0 00:19:09.557 00:19:09.557 00:19:09.557 Active Namespaces 00:19:09.557 ================= 00:19:09.557 Namespace ID:1 00:19:09.557 Error Recovery Timeout: Unlimited 00:19:09.557 Command Set Identifier: NVM (00h) 00:19:09.557 Deallocate: Supported 00:19:09.557 Deallocated/Unwritten Error: Supported 00:19:09.557 Deallocated Read Value: All 0x00 00:19:09.557 Deallocate in Write Zeroes: Not Supported 00:19:09.557 Deallocated Guard Field: 0xFFFF 00:19:09.557 Flush: Supported 00:19:09.557 Reservation: Not Supported 00:19:09.557 Namespace Sharing Capabilities: Multiple Controllers 00:19:09.557 Size (in LBAs): 262144 (1GiB) 00:19:09.557 Capacity (in LBAs): 262144 (1GiB) 00:19:09.557 Utilization (in LBAs): 262144 (1GiB) 00:19:09.557 Thin Provisioning: Not Supported 00:19:09.557 Per-NS Atomic Units: No 00:19:09.557 Maximum Single Source Range Length: 128 00:19:09.557 Maximum Copy Length: 128 00:19:09.557 Maximum Source Range Count: 128 00:19:09.557 NGUID/EUI64 Never Reused: No 00:19:09.557 Namespace Write Protected: No 
00:19:09.557 Endurance group ID: 1 00:19:09.557 Number of LBA Formats: 8 00:19:09.557 Current LBA Format: LBA Format #04 00:19:09.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:09.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:09.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:09.557 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:09.557 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:09.557 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:09.557 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:09.557 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:09.557 00:19:09.557 Get Feature FDP: 00:19:09.557 ================ 00:19:09.557 Enabled: Yes 00:19:09.557 FDP configuration index: 0 00:19:09.557 00:19:09.557 FDP configurations log page 00:19:09.557 =========================== 00:19:09.557 Number of FDP configurations: 1 00:19:09.557 Version: 0 00:19:09.557 Size: 112 00:19:09.557 FDP Configuration Descriptor: 0 00:19:09.557 Descriptor Size: 96 00:19:09.557 Reclaim Group Identifier format: 2 00:19:09.557 FDP Volatile Write Cache: Not Present 00:19:09.557 FDP Configuration: Valid 00:19:09.557 Vendor Specific Size: 0 00:19:09.557 Number of Reclaim Groups: 2 00:19:09.557 Number of Reclaim Unit Handles: 8 00:19:09.557 Max Placement Identifiers: 128 00:19:09.557 Number of Namespaces Supported: 256 00:19:09.557 Reclaim unit Nominal Size: 6000000 bytes 00:19:09.557 Estimated Reclaim Unit Time Limit: Not Reported 00:19:09.557 RUH Desc #000: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #001: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #002: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #003: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #004: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #005: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #006: RUH Type: Initially Isolated 00:19:09.557 RUH Desc #007: RUH Type: Initially Isolated 00:19:09.557 00:19:09.557 FDP reclaim unit handle usage log page 00:19:09.557 ====================================== 00:19:09.557 Number of Reclaim Unit Handles: 8 00:19:09.557 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:09.557 RUH Usage Desc #001: RUH Attributes: Unused 00:19:09.557 RUH Usage Desc #002: RUH Attributes: Unused 00:19:09.557 RUH Usage Desc #003: RUH Attributes: Unused 00:19:09.557 RUH Usage Desc #004: RUH Attributes: Unused 00:19:09.557 RUH Usage Desc #005: RUH Attributes: Unused 00:19:09.557 RUH Usage Desc #006: RUH Attributes: Unused 00:19:09.557 RUH Usage Desc #007: RUH Attributes: Unused 00:19:09.557 00:19:09.557 FDP statistics log page 00:19:09.557 ======================= 00:19:09.557 Host bytes with metadata written: 442998784 00:19:09.557 Media bytes with metadata written: 443064320 00:19:09.557 Media bytes erased: 0 00:19:09.557 00:19:09.557 FDP events log page 00:19:09.557 =================== 00:19:09.557 Number of FDP events: 0 00:19:09.557 00:19:09.557 NVM Specific Namespace Data 00:19:09.557 =========================== 00:19:09.557 Logical Block Storage Tag Mask: 0 00:19:09.557 Protection Information Capabilities: 00:19:09.557 16b Guard Protection Information Storage Tag Support: No 00:19:09.557 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:09.557 Storage Tag Check Read Support: No 00:19:09.557 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:09.557 ************************************ 00:19:09.557 END TEST nvme_identify 00:19:09.557 00:19:09.557 real 0m1.922s 00:19:09.557 user 0m0.758s 00:19:09.557 sys 0m0.949s 00:19:09.557 11:33:15 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.557 11:33:15 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:09.557 ************************************ 00:19:09.557 11:33:15 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:09.557 11:33:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.557 11:33:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.557 11:33:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.557 ************************************ 00:19:09.557 START TEST nvme_perf 00:19:09.557 ************************************ 00:19:09.557 11:33:15 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:19:09.557 11:33:15 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:10.938 Initializing NVMe Controllers 00:19:10.938 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:10.938 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:10.938 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:10.938 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:10.938 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:10.938 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:10.938 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:10.938 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:10.938 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:10.938 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:10.938 Initialization complete. Launching workers. 
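The identify dump above and the latency tables below come from two standalone SPDK binaries driven by nvme.sh, so both steps can be reproduced by hand outside the autotest harness. A minimal sketch, assuming the repo is built at /home/vagrant/spdk_repo/spdk and the QEMU NVMe controllers sit at the same PCIe addresses as in this job; -q/-w/-o/-t are queue depth, workload type, I/O size in bytes, and run time in seconds, -i is the shared-memory group ID, and -LL/-N are carried over verbatim from the test script (consult each tool's --help for their exact meaning):

  # dump controller and namespace capabilities for one controller
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # 12 KiB reads at queue depth 128 for 1 second, with latency tracking,
  # against every namespace of every attached controller
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N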
00:19:10.938 ======================================================== 00:19:10.938 Latency(us) 00:19:10.938 Device Information : IOPS MiB/s Average min max 00:19:10.938 PCIE (0000:00:10.0) NSID 1 from core 0: 12083.87 141.61 10607.00 7830.05 48002.40 00:19:10.938 PCIE (0000:00:11.0) NSID 1 from core 0: 12083.87 141.61 10579.47 7936.63 45092.43 00:19:10.938 PCIE (0000:00:13.0) NSID 1 from core 0: 12083.87 141.61 10550.09 8000.15 42753.84 00:19:10.938 PCIE (0000:00:12.0) NSID 1 from core 0: 12083.87 141.61 10520.77 7995.56 39739.75 00:19:10.938 PCIE (0000:00:12.0) NSID 2 from core 0: 12083.87 141.61 10491.61 8048.63 36785.80 00:19:10.938 PCIE (0000:00:12.0) NSID 3 from core 0: 12083.87 141.61 10462.31 8114.45 33774.94 00:19:10.938 ======================================================== 00:19:10.938 Total : 72503.21 849.65 10535.21 7830.05 48002.40 00:19:10.938 00:19:10.938 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:10.938 ================================================================================= 00:19:10.938 1.00000% : 8400.524us 00:19:10.938 10.00000% : 9294.196us 00:19:10.938 25.00000% : 9830.400us 00:19:10.938 50.00000% : 10307.025us 00:19:10.938 75.00000% : 10783.651us 00:19:10.938 90.00000% : 11260.276us 00:19:10.938 95.00000% : 11736.902us 00:19:10.938 98.00000% : 12988.044us 00:19:10.938 99.00000% : 37653.411us 00:19:10.938 99.50000% : 45517.731us 00:19:10.938 99.90000% : 47662.545us 00:19:10.938 99.99000% : 48139.171us 00:19:10.938 99.99900% : 48139.171us 00:19:10.938 99.99990% : 48139.171us 00:19:10.938 99.99999% : 48139.171us 00:19:10.938 00:19:10.938 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:10.938 ================================================================================= 00:19:10.938 1.00000% : 8519.680us 00:19:10.938 10.00000% : 9234.618us 00:19:10.938 25.00000% : 9889.978us 00:19:10.938 50.00000% : 10307.025us 00:19:10.938 75.00000% : 10724.073us 00:19:10.938 90.00000% : 11141.120us 00:19:10.938 95.00000% : 11736.902us 00:19:10.938 98.00000% : 12868.887us 00:19:10.938 99.00000% : 35270.284us 00:19:10.938 99.50000% : 42896.291us 00:19:10.938 99.90000% : 44802.793us 00:19:10.938 99.99000% : 45279.418us 00:19:10.938 99.99900% : 45279.418us 00:19:10.938 99.99990% : 45279.418us 00:19:10.938 99.99999% : 45279.418us 00:19:10.938 00:19:10.938 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:10.938 ================================================================================= 00:19:10.938 1.00000% : 8519.680us 00:19:10.938 10.00000% : 9294.196us 00:19:10.938 25.00000% : 9889.978us 00:19:10.938 50.00000% : 10307.025us 00:19:10.938 75.00000% : 10724.073us 00:19:10.938 90.00000% : 11141.120us 00:19:10.938 95.00000% : 11736.902us 00:19:10.938 98.00000% : 12749.731us 00:19:10.938 99.00000% : 32887.156us 00:19:10.938 99.50000% : 40274.851us 00:19:10.938 99.90000% : 42419.665us 00:19:10.938 99.99000% : 42896.291us 00:19:10.938 99.99900% : 42896.291us 00:19:10.938 99.99990% : 42896.291us 00:19:10.938 99.99999% : 42896.291us 00:19:10.938 00:19:10.938 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:10.938 ================================================================================= 00:19:10.938 1.00000% : 8519.680us 00:19:10.938 10.00000% : 9294.196us 00:19:10.938 25.00000% : 9889.978us 00:19:10.938 50.00000% : 10307.025us 00:19:10.938 75.00000% : 10724.073us 00:19:10.938 90.00000% : 11141.120us 00:19:10.938 95.00000% : 11677.324us 00:19:10.938 98.00000% : 12809.309us 
00:19:10.938 99.00000% : 30027.404us 00:19:10.938 99.50000% : 37415.098us 00:19:10.938 99.90000% : 39321.600us 00:19:10.938 99.99000% : 39798.225us 00:19:10.938 99.99900% : 39798.225us 00:19:10.938 99.99990% : 39798.225us 00:19:10.938 99.99999% : 39798.225us 00:19:10.938 00:19:10.938 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:10.938 ================================================================================= 00:19:10.938 1.00000% : 8519.680us 00:19:10.938 10.00000% : 9234.618us 00:19:10.938 25.00000% : 9889.978us 00:19:10.938 50.00000% : 10307.025us 00:19:10.938 75.00000% : 10724.073us 00:19:10.938 90.00000% : 11141.120us 00:19:10.938 95.00000% : 11677.324us 00:19:10.938 98.00000% : 12928.465us 00:19:10.938 99.00000% : 27167.651us 00:19:10.938 99.50000% : 34555.345us 00:19:10.938 99.90000% : 36461.847us 00:19:10.938 99.99000% : 36938.473us 00:19:10.938 99.99900% : 36938.473us 00:19:10.938 99.99990% : 36938.473us 00:19:10.938 99.99999% : 36938.473us 00:19:10.938 00:19:10.938 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:10.938 ================================================================================= 00:19:10.938 1.00000% : 8519.680us 00:19:10.938 10.00000% : 9234.618us 00:19:10.938 25.00000% : 9889.978us 00:19:10.938 50.00000% : 10307.025us 00:19:10.938 75.00000% : 10724.073us 00:19:10.938 90.00000% : 11141.120us 00:19:10.938 95.00000% : 11677.324us 00:19:10.938 98.00000% : 12928.465us 00:19:10.938 99.00000% : 24307.898us 00:19:10.938 99.50000% : 31695.593us 00:19:10.938 99.90000% : 33363.782us 00:19:10.938 99.99000% : 33840.407us 00:19:10.939 99.99900% : 33840.407us 00:19:10.939 99.99990% : 33840.407us 00:19:10.939 99.99999% : 33840.407us 00:19:10.939 00:19:10.939 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:10.939 ============================================================================== 00:19:10.939 Range in us Cumulative IO count 00:19:10.939 7804.742 - 7864.320: 0.0413% ( 5) 00:19:10.939 7864.320 - 7923.898: 0.0579% ( 2) 00:19:10.939 7923.898 - 7983.476: 0.0744% ( 2) 00:19:10.939 7983.476 - 8043.055: 0.0992% ( 3) 00:19:10.939 8043.055 - 8102.633: 0.1488% ( 6) 00:19:10.939 8102.633 - 8162.211: 0.2480% ( 12) 00:19:10.939 8162.211 - 8221.789: 0.3307% ( 10) 00:19:10.939 8221.789 - 8281.367: 0.4960% ( 20) 00:19:10.939 8281.367 - 8340.945: 0.8102% ( 38) 00:19:10.939 8340.945 - 8400.524: 1.0499% ( 29) 00:19:10.939 8400.524 - 8460.102: 1.5377% ( 59) 00:19:10.939 8460.102 - 8519.680: 2.0172% ( 58) 00:19:10.939 8519.680 - 8579.258: 2.5215% ( 61) 00:19:10.939 8579.258 - 8638.836: 3.1498% ( 76) 00:19:10.939 8638.836 - 8698.415: 3.8194% ( 81) 00:19:10.939 8698.415 - 8757.993: 4.4891% ( 81) 00:19:10.939 8757.993 - 8817.571: 5.1422% ( 79) 00:19:10.939 8817.571 - 8877.149: 5.7622% ( 75) 00:19:10.939 8877.149 - 8936.727: 6.4153% ( 79) 00:19:10.939 8936.727 - 8996.305: 7.0850% ( 81) 00:19:10.939 8996.305 - 9055.884: 7.7298% ( 78) 00:19:10.939 9055.884 - 9115.462: 8.4077% ( 82) 00:19:10.939 9115.462 - 9175.040: 9.0691% ( 80) 00:19:10.939 9175.040 - 9234.618: 9.7553% ( 83) 00:19:10.939 9234.618 - 9294.196: 10.5324% ( 94) 00:19:10.939 9294.196 - 9353.775: 11.4501% ( 111) 00:19:10.939 9353.775 - 9413.353: 12.6571% ( 146) 00:19:10.939 9413.353 - 9472.931: 14.0460% ( 168) 00:19:10.939 9472.931 - 9532.509: 15.5589% ( 183) 00:19:10.939 9532.509 - 9592.087: 17.4024% ( 223) 00:19:10.939 9592.087 - 9651.665: 19.4940% ( 253) 00:19:10.939 9651.665 - 9711.244: 21.6518% ( 261) 00:19:10.939 9711.244 - 9770.822: 24.0575% ( 291) 
00:19:10.939 9770.822 - 9830.400: 26.5377% ( 300) 00:19:10.939 9830.400 - 9889.978: 29.2576% ( 329) 00:19:10.939 9889.978 - 9949.556: 32.2669% ( 364) 00:19:10.939 9949.556 - 10009.135: 35.1604% ( 350) 00:19:10.939 10009.135 - 10068.713: 38.3433% ( 385) 00:19:10.939 10068.713 - 10128.291: 41.5344% ( 386) 00:19:10.939 10128.291 - 10187.869: 44.6346% ( 375) 00:19:10.939 10187.869 - 10247.447: 48.0820% ( 417) 00:19:10.939 10247.447 - 10307.025: 51.3972% ( 401) 00:19:10.939 10307.025 - 10366.604: 54.6048% ( 388) 00:19:10.939 10366.604 - 10426.182: 58.0109% ( 412) 00:19:10.939 10426.182 - 10485.760: 61.3343% ( 402) 00:19:10.939 10485.760 - 10545.338: 64.7156% ( 409) 00:19:10.939 10545.338 - 10604.916: 67.8323% ( 377) 00:19:10.939 10604.916 - 10664.495: 71.1888% ( 406) 00:19:10.939 10664.495 - 10724.073: 74.4296% ( 392) 00:19:10.939 10724.073 - 10783.651: 77.3231% ( 350) 00:19:10.939 10783.651 - 10843.229: 80.0430% ( 329) 00:19:10.939 10843.229 - 10902.807: 82.3247% ( 276) 00:19:10.939 10902.807 - 10962.385: 84.2675% ( 235) 00:19:10.939 10962.385 - 11021.964: 86.0202% ( 212) 00:19:10.939 11021.964 - 11081.542: 87.4504% ( 173) 00:19:10.939 11081.542 - 11141.120: 88.7979% ( 163) 00:19:10.939 11141.120 - 11200.698: 89.9223% ( 136) 00:19:10.939 11200.698 - 11260.276: 90.9061% ( 119) 00:19:10.939 11260.276 - 11319.855: 91.7741% ( 105) 00:19:10.939 11319.855 - 11379.433: 92.4438% ( 81) 00:19:10.939 11379.433 - 11439.011: 93.0721% ( 76) 00:19:10.939 11439.011 - 11498.589: 93.5351% ( 56) 00:19:10.939 11498.589 - 11558.167: 93.9732% ( 53) 00:19:10.939 11558.167 - 11617.745: 94.3700% ( 48) 00:19:10.939 11617.745 - 11677.324: 94.7338% ( 44) 00:19:10.939 11677.324 - 11736.902: 95.0810% ( 42) 00:19:10.939 11736.902 - 11796.480: 95.4117% ( 40) 00:19:10.939 11796.480 - 11856.058: 95.6515% ( 29) 00:19:10.939 11856.058 - 11915.636: 95.8747% ( 27) 00:19:10.939 11915.636 - 11975.215: 96.0483% ( 21) 00:19:10.939 11975.215 - 12034.793: 96.1392% ( 11) 00:19:10.939 12034.793 - 12094.371: 96.2880% ( 18) 00:19:10.939 12094.371 - 12153.949: 96.4120% ( 15) 00:19:10.939 12153.949 - 12213.527: 96.5360% ( 15) 00:19:10.939 12213.527 - 12273.105: 96.6601% ( 15) 00:19:10.939 12273.105 - 12332.684: 96.7675% ( 13) 00:19:10.939 12332.684 - 12392.262: 96.9411% ( 21) 00:19:10.939 12392.262 - 12451.840: 97.0651% ( 15) 00:19:10.939 12451.840 - 12511.418: 97.2057% ( 17) 00:19:10.939 12511.418 - 12570.996: 97.3545% ( 18) 00:19:10.939 12570.996 - 12630.575: 97.4620% ( 13) 00:19:10.939 12630.575 - 12690.153: 97.5694% ( 13) 00:19:10.939 12690.153 - 12749.731: 97.6604% ( 11) 00:19:10.939 12749.731 - 12809.309: 97.7679% ( 13) 00:19:10.939 12809.309 - 12868.887: 97.8919% ( 15) 00:19:10.939 12868.887 - 12928.465: 97.9745% ( 10) 00:19:10.939 12928.465 - 12988.044: 98.0820% ( 13) 00:19:10.939 12988.044 - 13047.622: 98.1895% ( 13) 00:19:10.939 13047.622 - 13107.200: 98.2887% ( 12) 00:19:10.939 13107.200 - 13166.778: 98.3879% ( 12) 00:19:10.939 13166.778 - 13226.356: 98.4540% ( 8) 00:19:10.939 13226.356 - 13285.935: 98.5036% ( 6) 00:19:10.939 13285.935 - 13345.513: 98.5532% ( 6) 00:19:10.939 13345.513 - 13405.091: 98.6111% ( 7) 00:19:10.939 13405.091 - 13464.669: 98.6442% ( 4) 00:19:10.939 13464.669 - 13524.247: 98.6938% ( 6) 00:19:10.939 13524.247 - 13583.825: 98.7269% ( 4) 00:19:10.939 13583.825 - 13643.404: 98.7351% ( 1) 00:19:10.939 13643.404 - 13702.982: 98.7599% ( 3) 00:19:10.939 13702.982 - 13762.560: 98.7765% ( 2) 00:19:10.939 13762.560 - 13822.138: 98.8013% ( 3) 00:19:10.939 13822.138 - 13881.716: 98.8261% ( 3) 00:19:10.939 13881.716 - 
13941.295: 98.8591% ( 4) 00:19:10.939 13941.295 - 14000.873: 98.8674% ( 1) 00:19:10.939 14000.873 - 14060.451: 98.9005% ( 4) 00:19:10.939 14060.451 - 14120.029: 98.9170% ( 2) 00:19:10.939 14120.029 - 14179.607: 98.9335% ( 2) 00:19:10.939 14179.607 - 14239.185: 98.9418% ( 1) 00:19:10.939 36938.473 - 37176.785: 98.9583% ( 2) 00:19:10.939 37176.785 - 37415.098: 98.9914% ( 4) 00:19:10.939 37415.098 - 37653.411: 99.0327% ( 5) 00:19:10.939 37653.411 - 37891.724: 99.0823% ( 6) 00:19:10.939 37891.724 - 38130.036: 99.1237% ( 5) 00:19:10.939 38130.036 - 38368.349: 99.1650% ( 5) 00:19:10.939 38368.349 - 38606.662: 99.2063% ( 5) 00:19:10.939 38606.662 - 38844.975: 99.2477% ( 5) 00:19:10.939 38844.975 - 39083.287: 99.2973% ( 6) 00:19:10.939 39083.287 - 39321.600: 99.3386% ( 5) 00:19:10.939 39321.600 - 39559.913: 99.3717% ( 4) 00:19:10.939 39559.913 - 39798.225: 99.4213% ( 6) 00:19:10.939 39798.225 - 40036.538: 99.4709% ( 6) 00:19:10.939 45041.105 - 45279.418: 99.4874% ( 2) 00:19:10.939 45279.418 - 45517.731: 99.5288% ( 5) 00:19:10.939 45517.731 - 45756.044: 99.5784% ( 6) 00:19:10.939 45756.044 - 45994.356: 99.6280% ( 6) 00:19:10.939 45994.356 - 46232.669: 99.6693% ( 5) 00:19:10.939 46232.669 - 46470.982: 99.7189% ( 6) 00:19:10.939 46470.982 - 46709.295: 99.7603% ( 5) 00:19:10.939 46709.295 - 46947.607: 99.8099% ( 6) 00:19:10.939 46947.607 - 47185.920: 99.8512% ( 5) 00:19:10.939 47185.920 - 47424.233: 99.8925% ( 5) 00:19:10.939 47424.233 - 47662.545: 99.9339% ( 5) 00:19:10.939 47662.545 - 47900.858: 99.9835% ( 6) 00:19:10.939 47900.858 - 48139.171: 100.0000% ( 2) 00:19:10.939 00:19:10.939 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:10.939 ============================================================================== 00:19:10.939 Range in us Cumulative IO count 00:19:10.939 7923.898 - 7983.476: 0.0413% ( 5) 00:19:10.939 7983.476 - 8043.055: 0.0661% ( 3) 00:19:10.939 8043.055 - 8102.633: 0.0992% ( 4) 00:19:10.939 8102.633 - 8162.211: 0.1405% ( 5) 00:19:10.940 8162.211 - 8221.789: 0.1819% ( 5) 00:19:10.940 8221.789 - 8281.367: 0.2728% ( 11) 00:19:10.940 8281.367 - 8340.945: 0.4464% ( 21) 00:19:10.940 8340.945 - 8400.524: 0.6448% ( 24) 00:19:10.940 8400.524 - 8460.102: 0.9838% ( 41) 00:19:10.940 8460.102 - 8519.680: 1.4633% ( 58) 00:19:10.940 8519.680 - 8579.258: 1.9263% ( 56) 00:19:10.940 8579.258 - 8638.836: 2.5298% ( 73) 00:19:10.940 8638.836 - 8698.415: 3.1829% ( 79) 00:19:10.940 8698.415 - 8757.993: 3.8608% ( 82) 00:19:10.940 8757.993 - 8817.571: 4.5966% ( 89) 00:19:10.940 8817.571 - 8877.149: 5.3902% ( 96) 00:19:10.940 8877.149 - 8936.727: 6.1839% ( 96) 00:19:10.940 8936.727 - 8996.305: 6.9692% ( 95) 00:19:10.940 8996.305 - 9055.884: 7.7298% ( 92) 00:19:10.940 9055.884 - 9115.462: 8.4987% ( 93) 00:19:10.940 9115.462 - 9175.040: 9.2345% ( 89) 00:19:10.940 9175.040 - 9234.618: 10.0612% ( 100) 00:19:10.940 9234.618 - 9294.196: 10.8879% ( 100) 00:19:10.940 9294.196 - 9353.775: 11.7063% ( 99) 00:19:10.940 9353.775 - 9413.353: 12.5331% ( 100) 00:19:10.940 9413.353 - 9472.931: 13.3929% ( 104) 00:19:10.940 9472.931 - 9532.509: 14.5007% ( 134) 00:19:10.940 9532.509 - 9592.087: 15.8565% ( 164) 00:19:10.940 9592.087 - 9651.665: 17.5926% ( 210) 00:19:10.940 9651.665 - 9711.244: 19.4031% ( 219) 00:19:10.940 9711.244 - 9770.822: 21.5774% ( 263) 00:19:10.940 9770.822 - 9830.400: 24.0989% ( 305) 00:19:10.940 9830.400 - 9889.978: 26.6700% ( 311) 00:19:10.940 9889.978 - 9949.556: 29.6958% ( 366) 00:19:10.940 9949.556 - 10009.135: 32.7464% ( 369) 00:19:10.940 10009.135 - 10068.713: 36.0036% ( 
394) 00:19:10.940 10068.713 - 10128.291: 39.6991% ( 447) 00:19:10.940 10128.291 - 10187.869: 43.4276% ( 451) 00:19:10.940 10187.869 - 10247.447: 47.2470% ( 462) 00:19:10.940 10247.447 - 10307.025: 51.0086% ( 455) 00:19:10.940 10307.025 - 10366.604: 54.8115% ( 460) 00:19:10.940 10366.604 - 10426.182: 58.6723% ( 467) 00:19:10.940 10426.182 - 10485.760: 62.5827% ( 473) 00:19:10.940 10485.760 - 10545.338: 66.4352% ( 466) 00:19:10.940 10545.338 - 10604.916: 70.2960% ( 467) 00:19:10.940 10604.916 - 10664.495: 73.9170% ( 438) 00:19:10.940 10664.495 - 10724.073: 77.1908% ( 396) 00:19:10.940 10724.073 - 10783.651: 79.9934% ( 339) 00:19:10.940 10783.651 - 10843.229: 82.2255% ( 270) 00:19:10.940 10843.229 - 10902.807: 84.2593% ( 246) 00:19:10.940 10902.807 - 10962.385: 86.1524% ( 229) 00:19:10.940 10962.385 - 11021.964: 87.8390% ( 204) 00:19:10.940 11021.964 - 11081.542: 89.0956% ( 152) 00:19:10.940 11081.542 - 11141.120: 90.1124% ( 123) 00:19:10.940 11141.120 - 11200.698: 90.9474% ( 101) 00:19:10.940 11200.698 - 11260.276: 91.7163% ( 93) 00:19:10.940 11260.276 - 11319.855: 92.3198% ( 73) 00:19:10.940 11319.855 - 11379.433: 92.8406% ( 63) 00:19:10.940 11379.433 - 11439.011: 93.3201% ( 58) 00:19:10.940 11439.011 - 11498.589: 93.8079% ( 59) 00:19:10.940 11498.589 - 11558.167: 94.2130% ( 49) 00:19:10.940 11558.167 - 11617.745: 94.5767% ( 44) 00:19:10.940 11617.745 - 11677.324: 94.8661% ( 35) 00:19:10.940 11677.324 - 11736.902: 95.1058% ( 29) 00:19:10.940 11736.902 - 11796.480: 95.2877% ( 22) 00:19:10.940 11796.480 - 11856.058: 95.4778% ( 23) 00:19:10.940 11856.058 - 11915.636: 95.6515% ( 21) 00:19:10.940 11915.636 - 11975.215: 95.7920% ( 17) 00:19:10.940 11975.215 - 12034.793: 95.9408% ( 18) 00:19:10.940 12034.793 - 12094.371: 96.0731% ( 16) 00:19:10.940 12094.371 - 12153.949: 96.2384% ( 20) 00:19:10.940 12153.949 - 12213.527: 96.4534% ( 26) 00:19:10.940 12213.527 - 12273.105: 96.6766% ( 27) 00:19:10.940 12273.105 - 12332.684: 96.8502% ( 21) 00:19:10.940 12332.684 - 12392.262: 97.0321% ( 22) 00:19:10.940 12392.262 - 12451.840: 97.1644% ( 16) 00:19:10.940 12451.840 - 12511.418: 97.3049% ( 17) 00:19:10.940 12511.418 - 12570.996: 97.4124% ( 13) 00:19:10.940 12570.996 - 12630.575: 97.5364% ( 15) 00:19:10.940 12630.575 - 12690.153: 97.6687% ( 16) 00:19:10.940 12690.153 - 12749.731: 97.8009% ( 16) 00:19:10.940 12749.731 - 12809.309: 97.9249% ( 15) 00:19:10.940 12809.309 - 12868.887: 98.0489% ( 15) 00:19:10.940 12868.887 - 12928.465: 98.1895% ( 17) 00:19:10.940 12928.465 - 12988.044: 98.3052% ( 14) 00:19:10.940 12988.044 - 13047.622: 98.4210% ( 14) 00:19:10.940 13047.622 - 13107.200: 98.5284% ( 13) 00:19:10.940 13107.200 - 13166.778: 98.6111% ( 10) 00:19:10.940 13166.778 - 13226.356: 98.6690% ( 7) 00:19:10.940 13226.356 - 13285.935: 98.7434% ( 9) 00:19:10.940 13285.935 - 13345.513: 98.7847% ( 5) 00:19:10.940 13345.513 - 13405.091: 98.8178% ( 4) 00:19:10.940 13405.091 - 13464.669: 98.8591% ( 5) 00:19:10.940 13464.669 - 13524.247: 98.9005% ( 5) 00:19:10.940 13524.247 - 13583.825: 98.9418% ( 5) 00:19:10.940 34793.658 - 35031.971: 98.9831% ( 5) 00:19:10.940 35031.971 - 35270.284: 99.0245% ( 5) 00:19:10.940 35270.284 - 35508.596: 99.0658% ( 5) 00:19:10.940 35508.596 - 35746.909: 99.1154% ( 6) 00:19:10.940 35746.909 - 35985.222: 99.1567% ( 5) 00:19:10.940 35985.222 - 36223.535: 99.2063% ( 6) 00:19:10.940 36223.535 - 36461.847: 99.2560% ( 6) 00:19:10.940 36461.847 - 36700.160: 99.2890% ( 4) 00:19:10.940 36700.160 - 36938.473: 99.3469% ( 7) 00:19:10.940 36938.473 - 37176.785: 99.3882% ( 5) 00:19:10.940 37176.785 - 
37415.098: 99.4378% ( 6) 00:19:10.940 37415.098 - 37653.411: 99.4709% ( 4) 00:19:10.940 42419.665 - 42657.978: 99.4957% ( 3) 00:19:10.940 42657.978 - 42896.291: 99.5453% ( 6) 00:19:10.940 42896.291 - 43134.604: 99.5949% ( 6) 00:19:10.940 43134.604 - 43372.916: 99.6362% ( 5) 00:19:10.940 43372.916 - 43611.229: 99.6858% ( 6) 00:19:10.940 43611.229 - 43849.542: 99.7354% ( 6) 00:19:10.940 43849.542 - 44087.855: 99.7851% ( 6) 00:19:10.940 44087.855 - 44326.167: 99.8347% ( 6) 00:19:10.940 44326.167 - 44564.480: 99.8925% ( 7) 00:19:10.940 44564.480 - 44802.793: 99.9339% ( 5) 00:19:10.940 44802.793 - 45041.105: 99.9835% ( 6) 00:19:10.940 45041.105 - 45279.418: 100.0000% ( 2) 00:19:10.940 00:19:10.940 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:10.940 ============================================================================== 00:19:10.940 Range in us Cumulative IO count 00:19:10.940 7983.476 - 8043.055: 0.0331% ( 4) 00:19:10.940 8043.055 - 8102.633: 0.0579% ( 3) 00:19:10.940 8102.633 - 8162.211: 0.1075% ( 6) 00:19:10.940 8162.211 - 8221.789: 0.1488% ( 5) 00:19:10.940 8221.789 - 8281.367: 0.2563% ( 13) 00:19:10.940 8281.367 - 8340.945: 0.4299% ( 21) 00:19:10.940 8340.945 - 8400.524: 0.6614% ( 28) 00:19:10.940 8400.524 - 8460.102: 0.9921% ( 40) 00:19:10.940 8460.102 - 8519.680: 1.3972% ( 49) 00:19:10.940 8519.680 - 8579.258: 1.9428% ( 66) 00:19:10.940 8579.258 - 8638.836: 2.5050% ( 68) 00:19:10.941 8638.836 - 8698.415: 3.1085% ( 73) 00:19:10.941 8698.415 - 8757.993: 3.8773% ( 93) 00:19:10.941 8757.993 - 8817.571: 4.6627% ( 95) 00:19:10.941 8817.571 - 8877.149: 5.3902% ( 88) 00:19:10.941 8877.149 - 8936.727: 6.1012% ( 86) 00:19:10.941 8936.727 - 8996.305: 6.8370% ( 89) 00:19:10.941 8996.305 - 9055.884: 7.5728% ( 89) 00:19:10.941 9055.884 - 9115.462: 8.2837% ( 86) 00:19:10.941 9115.462 - 9175.040: 9.0360% ( 91) 00:19:10.941 9175.040 - 9234.618: 9.7884% ( 91) 00:19:10.941 9234.618 - 9294.196: 10.5324% ( 90) 00:19:10.941 9294.196 - 9353.775: 11.3095% ( 94) 00:19:10.941 9353.775 - 9413.353: 12.1197% ( 98) 00:19:10.941 9413.353 - 9472.931: 13.0043% ( 107) 00:19:10.941 9472.931 - 9532.509: 14.1534% ( 139) 00:19:10.941 9532.509 - 9592.087: 15.4183% ( 153) 00:19:10.941 9592.087 - 9651.665: 17.0718% ( 200) 00:19:10.941 9651.665 - 9711.244: 18.8988% ( 221) 00:19:10.941 9711.244 - 9770.822: 20.9243% ( 245) 00:19:10.941 9770.822 - 9830.400: 23.3218% ( 290) 00:19:10.941 9830.400 - 9889.978: 26.0830% ( 334) 00:19:10.941 9889.978 - 9949.556: 29.1501% ( 371) 00:19:10.941 9949.556 - 10009.135: 32.3909% ( 392) 00:19:10.941 10009.135 - 10068.713: 36.0284% ( 440) 00:19:10.941 10068.713 - 10128.291: 39.6660% ( 440) 00:19:10.941 10128.291 - 10187.869: 43.4358% ( 456) 00:19:10.941 10187.869 - 10247.447: 47.2470% ( 461) 00:19:10.941 10247.447 - 10307.025: 51.1657% ( 474) 00:19:10.941 10307.025 - 10366.604: 55.1835% ( 486) 00:19:10.941 10366.604 - 10426.182: 59.0608% ( 469) 00:19:10.941 10426.182 - 10485.760: 63.0291% ( 480) 00:19:10.941 10485.760 - 10545.338: 66.9147% ( 470) 00:19:10.941 10545.338 - 10604.916: 70.6349% ( 450) 00:19:10.941 10604.916 - 10664.495: 74.3056% ( 444) 00:19:10.941 10664.495 - 10724.073: 77.3892% ( 373) 00:19:10.941 10724.073 - 10783.651: 80.2827% ( 350) 00:19:10.941 10783.651 - 10843.229: 82.7464% ( 298) 00:19:10.941 10843.229 - 10902.807: 84.9124% ( 262) 00:19:10.941 10902.807 - 10962.385: 86.7312% ( 220) 00:19:10.941 10962.385 - 11021.964: 88.2937% ( 189) 00:19:10.941 11021.964 - 11081.542: 89.6743% ( 167) 00:19:10.941 11081.542 - 11141.120: 90.7325% ( 128) 00:19:10.941 
11141.120 - 11200.698: 91.6088% ( 106) 00:19:10.941 11200.698 - 11260.276: 92.2454% ( 77) 00:19:10.941 11260.276 - 11319.855: 92.7414% ( 60) 00:19:10.941 11319.855 - 11379.433: 93.1630% ( 51) 00:19:10.941 11379.433 - 11439.011: 93.5681% ( 49) 00:19:10.941 11439.011 - 11498.589: 93.9649% ( 48) 00:19:10.941 11498.589 - 11558.167: 94.3039% ( 41) 00:19:10.941 11558.167 - 11617.745: 94.6263% ( 39) 00:19:10.941 11617.745 - 11677.324: 94.8991% ( 33) 00:19:10.941 11677.324 - 11736.902: 95.1306% ( 28) 00:19:10.941 11736.902 - 11796.480: 95.3538% ( 27) 00:19:10.941 11796.480 - 11856.058: 95.5357% ( 22) 00:19:10.941 11856.058 - 11915.636: 95.7507% ( 26) 00:19:10.941 11915.636 - 11975.215: 95.9656% ( 26) 00:19:10.941 11975.215 - 12034.793: 96.2054% ( 29) 00:19:10.941 12034.793 - 12094.371: 96.4368% ( 28) 00:19:10.941 12094.371 - 12153.949: 96.6270% ( 23) 00:19:10.941 12153.949 - 12213.527: 96.7923% ( 20) 00:19:10.941 12213.527 - 12273.105: 96.9659% ( 21) 00:19:10.941 12273.105 - 12332.684: 97.1147% ( 18) 00:19:10.941 12332.684 - 12392.262: 97.2636% ( 18) 00:19:10.941 12392.262 - 12451.840: 97.3793% ( 14) 00:19:10.941 12451.840 - 12511.418: 97.4950% ( 14) 00:19:10.941 12511.418 - 12570.996: 97.6356% ( 17) 00:19:10.941 12570.996 - 12630.575: 97.7927% ( 19) 00:19:10.941 12630.575 - 12690.153: 97.9497% ( 19) 00:19:10.941 12690.153 - 12749.731: 98.0737% ( 15) 00:19:10.941 12749.731 - 12809.309: 98.1895% ( 14) 00:19:10.941 12809.309 - 12868.887: 98.2804% ( 11) 00:19:10.941 12868.887 - 12928.465: 98.3796% ( 12) 00:19:10.941 12928.465 - 12988.044: 98.4540% ( 9) 00:19:10.941 12988.044 - 13047.622: 98.5367% ( 10) 00:19:10.941 13047.622 - 13107.200: 98.6111% ( 9) 00:19:10.941 13107.200 - 13166.778: 98.6690% ( 7) 00:19:10.941 13166.778 - 13226.356: 98.6938% ( 3) 00:19:10.941 13226.356 - 13285.935: 98.7186% ( 3) 00:19:10.941 13285.935 - 13345.513: 98.7434% ( 3) 00:19:10.941 13345.513 - 13405.091: 98.7682% ( 3) 00:19:10.941 13405.091 - 13464.669: 98.7930% ( 3) 00:19:10.941 13464.669 - 13524.247: 98.8178% ( 3) 00:19:10.941 13524.247 - 13583.825: 98.8509% ( 4) 00:19:10.941 13583.825 - 13643.404: 98.8757% ( 3) 00:19:10.941 13643.404 - 13702.982: 98.9005% ( 3) 00:19:10.941 13702.982 - 13762.560: 98.9335% ( 4) 00:19:10.941 13762.560 - 13822.138: 98.9418% ( 1) 00:19:10.941 32172.218 - 32410.531: 98.9501% ( 1) 00:19:10.941 32410.531 - 32648.844: 98.9997% ( 6) 00:19:10.941 32648.844 - 32887.156: 99.0410% ( 5) 00:19:10.941 32887.156 - 33125.469: 99.0906% ( 6) 00:19:10.941 33125.469 - 33363.782: 99.1402% ( 6) 00:19:10.941 33363.782 - 33602.095: 99.1815% ( 5) 00:19:10.941 33602.095 - 33840.407: 99.2229% ( 5) 00:19:10.941 33840.407 - 34078.720: 99.2725% ( 6) 00:19:10.941 34078.720 - 34317.033: 99.3138% ( 5) 00:19:10.941 34317.033 - 34555.345: 99.3634% ( 6) 00:19:10.941 34555.345 - 34793.658: 99.4048% ( 5) 00:19:10.941 34793.658 - 35031.971: 99.4544% ( 6) 00:19:10.941 35031.971 - 35270.284: 99.4709% ( 2) 00:19:10.941 40036.538 - 40274.851: 99.5122% ( 5) 00:19:10.941 40274.851 - 40513.164: 99.5536% ( 5) 00:19:10.941 40513.164 - 40751.476: 99.6032% ( 6) 00:19:10.941 40751.476 - 40989.789: 99.6445% ( 5) 00:19:10.941 40989.789 - 41228.102: 99.6941% ( 6) 00:19:10.941 41228.102 - 41466.415: 99.7354% ( 5) 00:19:10.941 41466.415 - 41704.727: 99.7851% ( 6) 00:19:10.941 41704.727 - 41943.040: 99.8347% ( 6) 00:19:10.941 41943.040 - 42181.353: 99.8843% ( 6) 00:19:10.941 42181.353 - 42419.665: 99.9256% ( 5) 00:19:10.941 42419.665 - 42657.978: 99.9752% ( 6) 00:19:10.941 42657.978 - 42896.291: 100.0000% ( 3) 00:19:10.941 00:19:10.941 Latency 
histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:10.941 ============================================================================== 00:19:10.941 Range in us Cumulative IO count 00:19:10.941 7983.476 - 8043.055: 0.0331% ( 4) 00:19:10.941 8043.055 - 8102.633: 0.0661% ( 4) 00:19:10.941 8102.633 - 8162.211: 0.0909% ( 3) 00:19:10.941 8162.211 - 8221.789: 0.1736% ( 10) 00:19:10.941 8221.789 - 8281.367: 0.2563% ( 10) 00:19:10.941 8281.367 - 8340.945: 0.3803% ( 15) 00:19:10.941 8340.945 - 8400.524: 0.6035% ( 27) 00:19:10.941 8400.524 - 8460.102: 0.9011% ( 36) 00:19:10.941 8460.102 - 8519.680: 1.2649% ( 44) 00:19:10.941 8519.680 - 8579.258: 1.8105% ( 66) 00:19:10.941 8579.258 - 8638.836: 2.4471% ( 77) 00:19:10.941 8638.836 - 8698.415: 3.1829% ( 89) 00:19:10.941 8698.415 - 8757.993: 3.9517% ( 93) 00:19:10.941 8757.993 - 8817.571: 4.6958% ( 90) 00:19:10.941 8817.571 - 8877.149: 5.4563% ( 92) 00:19:10.941 8877.149 - 8936.727: 6.2417% ( 95) 00:19:10.941 8936.727 - 8996.305: 6.9527% ( 86) 00:19:10.941 8996.305 - 9055.884: 7.6637% ( 86) 00:19:10.941 9055.884 - 9115.462: 8.4160% ( 91) 00:19:10.941 9115.462 - 9175.040: 9.1518% ( 89) 00:19:10.941 9175.040 - 9234.618: 9.9785% ( 100) 00:19:10.941 9234.618 - 9294.196: 10.7804% ( 97) 00:19:10.941 9294.196 - 9353.775: 11.5658% ( 95) 00:19:10.941 9353.775 - 9413.353: 12.3429% ( 94) 00:19:10.941 9413.353 - 9472.931: 13.2523% ( 110) 00:19:10.941 9472.931 - 9532.509: 14.3932% ( 138) 00:19:10.941 9532.509 - 9592.087: 15.6167% ( 148) 00:19:10.941 9592.087 - 9651.665: 17.0139% ( 169) 00:19:10.941 9651.665 - 9711.244: 18.8079% ( 217) 00:19:10.941 9711.244 - 9770.822: 20.7837% ( 239) 00:19:10.941 9770.822 - 9830.400: 23.0489% ( 274) 00:19:10.941 9830.400 - 9889.978: 25.7523% ( 327) 00:19:10.941 9889.978 - 9949.556: 28.7781% ( 366) 00:19:10.941 9949.556 - 10009.135: 32.1263% ( 405) 00:19:10.941 10009.135 - 10068.713: 35.7226% ( 435) 00:19:10.941 10068.713 - 10128.291: 39.4345% ( 449) 00:19:10.942 10128.291 - 10187.869: 43.1713% ( 452) 00:19:10.942 10187.869 - 10247.447: 46.9163% ( 453) 00:19:10.942 10247.447 - 10307.025: 50.8681% ( 478) 00:19:10.942 10307.025 - 10366.604: 54.8032% ( 476) 00:19:10.942 10366.604 - 10426.182: 58.8294% ( 487) 00:19:10.942 10426.182 - 10485.760: 62.6901% ( 467) 00:19:10.942 10485.760 - 10545.338: 66.6832% ( 483) 00:19:10.942 10545.338 - 10604.916: 70.5522% ( 468) 00:19:10.942 10604.916 - 10664.495: 74.1815% ( 439) 00:19:10.942 10664.495 - 10724.073: 77.6042% ( 414) 00:19:10.942 10724.073 - 10783.651: 80.5721% ( 359) 00:19:10.942 10783.651 - 10843.229: 83.0688% ( 302) 00:19:10.942 10843.229 - 10902.807: 85.1935% ( 257) 00:19:10.942 10902.807 - 10962.385: 86.8882% ( 205) 00:19:10.942 10962.385 - 11021.964: 88.4342% ( 187) 00:19:10.942 11021.964 - 11081.542: 89.7156% ( 155) 00:19:10.942 11081.542 - 11141.120: 90.6829% ( 117) 00:19:10.942 11141.120 - 11200.698: 91.5509% ( 105) 00:19:10.942 11200.698 - 11260.276: 92.2371% ( 83) 00:19:10.942 11260.276 - 11319.855: 92.8075% ( 69) 00:19:10.942 11319.855 - 11379.433: 93.2457% ( 53) 00:19:10.942 11379.433 - 11439.011: 93.6921% ( 54) 00:19:10.942 11439.011 - 11498.589: 94.1055% ( 50) 00:19:10.942 11498.589 - 11558.167: 94.4940% ( 47) 00:19:10.942 11558.167 - 11617.745: 94.7999% ( 37) 00:19:10.942 11617.745 - 11677.324: 95.1306% ( 40) 00:19:10.942 11677.324 - 11736.902: 95.4117% ( 34) 00:19:10.942 11736.902 - 11796.480: 95.6019% ( 23) 00:19:10.942 11796.480 - 11856.058: 95.7755% ( 21) 00:19:10.942 11856.058 - 11915.636: 95.9160% ( 17) 00:19:10.942 11915.636 - 11975.215: 96.1062% ( 23) 
00:19:10.942 11975.215 - 12034.793: 96.2963% ( 23) 00:19:10.942 12034.793 - 12094.371: 96.4947% ( 24) 00:19:10.942 12094.371 - 12153.949: 96.6601% ( 20) 00:19:10.942 12153.949 - 12213.527: 96.8254% ( 20) 00:19:10.942 12213.527 - 12273.105: 96.9742% ( 18) 00:19:10.942 12273.105 - 12332.684: 97.1396% ( 20) 00:19:10.942 12332.684 - 12392.262: 97.3049% ( 20) 00:19:10.942 12392.262 - 12451.840: 97.4454% ( 17) 00:19:10.942 12451.840 - 12511.418: 97.5612% ( 14) 00:19:10.942 12511.418 - 12570.996: 97.6769% ( 14) 00:19:10.942 12570.996 - 12630.575: 97.7596% ( 10) 00:19:10.942 12630.575 - 12690.153: 97.8423% ( 10) 00:19:10.942 12690.153 - 12749.731: 97.9332% ( 11) 00:19:10.942 12749.731 - 12809.309: 98.0076% ( 9) 00:19:10.942 12809.309 - 12868.887: 98.1068% ( 12) 00:19:10.942 12868.887 - 12928.465: 98.2143% ( 13) 00:19:10.942 12928.465 - 12988.044: 98.2970% ( 10) 00:19:10.942 12988.044 - 13047.622: 98.3714% ( 9) 00:19:10.942 13047.622 - 13107.200: 98.4788% ( 13) 00:19:10.942 13107.200 - 13166.778: 98.5367% ( 7) 00:19:10.942 13166.778 - 13226.356: 98.5863% ( 6) 00:19:10.942 13226.356 - 13285.935: 98.6194% ( 4) 00:19:10.942 13285.935 - 13345.513: 98.6442% ( 3) 00:19:10.942 13345.513 - 13405.091: 98.6690% ( 3) 00:19:10.942 13405.091 - 13464.669: 98.6938% ( 3) 00:19:10.942 13464.669 - 13524.247: 98.7186% ( 3) 00:19:10.942 13524.247 - 13583.825: 98.7434% ( 3) 00:19:10.942 13583.825 - 13643.404: 98.7682% ( 3) 00:19:10.942 13643.404 - 13702.982: 98.7930% ( 3) 00:19:10.942 13702.982 - 13762.560: 98.8095% ( 2) 00:19:10.942 13762.560 - 13822.138: 98.8343% ( 3) 00:19:10.942 13822.138 - 13881.716: 98.8591% ( 3) 00:19:10.942 13881.716 - 13941.295: 98.8922% ( 4) 00:19:10.942 13941.295 - 14000.873: 98.9170% ( 3) 00:19:10.942 14000.873 - 14060.451: 98.9418% ( 3) 00:19:10.942 29550.778 - 29669.935: 98.9501% ( 1) 00:19:10.942 29669.935 - 29789.091: 98.9749% ( 3) 00:19:10.942 29789.091 - 29908.247: 98.9997% ( 3) 00:19:10.942 29908.247 - 30027.404: 99.0245% ( 3) 00:19:10.942 30027.404 - 30146.560: 99.0410% ( 2) 00:19:10.942 30146.560 - 30265.716: 99.0658% ( 3) 00:19:10.942 30265.716 - 30384.873: 99.0906% ( 3) 00:19:10.942 30384.873 - 30504.029: 99.1154% ( 3) 00:19:10.942 30504.029 - 30742.342: 99.1650% ( 6) 00:19:10.942 30742.342 - 30980.655: 99.2063% ( 5) 00:19:10.942 30980.655 - 31218.967: 99.2560% ( 6) 00:19:10.942 31218.967 - 31457.280: 99.3056% ( 6) 00:19:10.942 31457.280 - 31695.593: 99.3552% ( 6) 00:19:10.942 31695.593 - 31933.905: 99.3965% ( 5) 00:19:10.942 31933.905 - 32172.218: 99.4461% ( 6) 00:19:10.942 32172.218 - 32410.531: 99.4709% ( 3) 00:19:10.942 37176.785 - 37415.098: 99.5040% ( 4) 00:19:10.942 37415.098 - 37653.411: 99.5453% ( 5) 00:19:10.942 37653.411 - 37891.724: 99.5949% ( 6) 00:19:10.942 37891.724 - 38130.036: 99.6445% ( 6) 00:19:10.942 38130.036 - 38368.349: 99.6941% ( 6) 00:19:10.942 38368.349 - 38606.662: 99.7437% ( 6) 00:19:10.942 38606.662 - 38844.975: 99.7933% ( 6) 00:19:10.942 38844.975 - 39083.287: 99.8512% ( 7) 00:19:10.942 39083.287 - 39321.600: 99.9008% ( 6) 00:19:10.942 39321.600 - 39559.913: 99.9587% ( 7) 00:19:10.942 39559.913 - 39798.225: 100.0000% ( 5) 00:19:10.942 00:19:10.942 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:10.942 ============================================================================== 00:19:10.942 Range in us Cumulative IO count 00:19:10.942 8043.055 - 8102.633: 0.0413% ( 5) 00:19:10.942 8102.633 - 8162.211: 0.0744% ( 4) 00:19:10.942 8162.211 - 8221.789: 0.1240% ( 6) 00:19:10.942 8221.789 - 8281.367: 0.2149% ( 11) 00:19:10.942 8281.367 
- 8340.945: 0.3555% ( 17) 00:19:10.942 8340.945 - 8400.524: 0.6118% ( 31) 00:19:10.942 8400.524 - 8460.102: 0.9342% ( 39) 00:19:10.942 8460.102 - 8519.680: 1.3393% ( 49) 00:19:10.942 8519.680 - 8579.258: 1.8436% ( 61) 00:19:10.942 8579.258 - 8638.836: 2.5132% ( 81) 00:19:10.942 8638.836 - 8698.415: 3.1994% ( 83) 00:19:10.942 8698.415 - 8757.993: 3.9187% ( 87) 00:19:10.942 8757.993 - 8817.571: 4.6875% ( 93) 00:19:10.942 8817.571 - 8877.149: 5.4729% ( 95) 00:19:10.942 8877.149 - 8936.727: 6.2335% ( 92) 00:19:10.942 8936.727 - 8996.305: 7.0437% ( 98) 00:19:10.942 8996.305 - 9055.884: 7.7629% ( 87) 00:19:10.942 9055.884 - 9115.462: 8.5069% ( 90) 00:19:10.942 9115.462 - 9175.040: 9.2758% ( 93) 00:19:10.942 9175.040 - 9234.618: 10.0612% ( 95) 00:19:10.942 9234.618 - 9294.196: 10.8796% ( 99) 00:19:10.942 9294.196 - 9353.775: 11.6154% ( 89) 00:19:10.942 9353.775 - 9413.353: 12.4256% ( 98) 00:19:10.942 9413.353 - 9472.931: 13.3019% ( 106) 00:19:10.942 9472.931 - 9532.509: 14.3767% ( 130) 00:19:10.942 9532.509 - 9592.087: 15.6746% ( 157) 00:19:10.942 9592.087 - 9651.665: 17.1131% ( 174) 00:19:10.942 9651.665 - 9711.244: 18.8244% ( 207) 00:19:10.942 9711.244 - 9770.822: 20.7259% ( 230) 00:19:10.942 9770.822 - 9830.400: 23.1895% ( 298) 00:19:10.942 9830.400 - 9889.978: 25.7854% ( 314) 00:19:10.942 9889.978 - 9949.556: 28.7368% ( 357) 00:19:10.942 9949.556 - 10009.135: 32.0106% ( 396) 00:19:10.942 10009.135 - 10068.713: 35.5489% ( 428) 00:19:10.942 10068.713 - 10128.291: 39.1700% ( 438) 00:19:10.942 10128.291 - 10187.869: 42.9894% ( 462) 00:19:10.942 10187.869 - 10247.447: 46.8089% ( 462) 00:19:10.942 10247.447 - 10307.025: 50.7523% ( 477) 00:19:10.942 10307.025 - 10366.604: 54.6627% ( 473) 00:19:10.942 10366.604 - 10426.182: 58.6806% ( 486) 00:19:10.942 10426.182 - 10485.760: 62.7067% ( 487) 00:19:10.942 10485.760 - 10545.338: 66.6997% ( 483) 00:19:10.942 10545.338 - 10604.916: 70.5605% ( 467) 00:19:10.942 10604.916 - 10664.495: 74.2560% ( 447) 00:19:10.942 10664.495 - 10724.073: 77.5463% ( 398) 00:19:10.942 10724.073 - 10783.651: 80.5638% ( 365) 00:19:10.942 10783.651 - 10843.229: 83.0522% ( 301) 00:19:10.942 10843.229 - 10902.807: 85.1769% ( 257) 00:19:10.942 10902.807 - 10962.385: 86.9130% ( 210) 00:19:10.942 10962.385 - 11021.964: 88.3267% ( 171) 00:19:10.942 11021.964 - 11081.542: 89.5337% ( 146) 00:19:10.942 11081.542 - 11141.120: 90.4927% ( 116) 00:19:10.942 11141.120 - 11200.698: 91.3029% ( 98) 00:19:10.942 11200.698 - 11260.276: 92.0552% ( 91) 00:19:10.942 11260.276 - 11319.855: 92.7001% ( 78) 00:19:10.942 11319.855 - 11379.433: 93.2292% ( 64) 00:19:10.942 11379.433 - 11439.011: 93.7252% ( 60) 00:19:10.943 11439.011 - 11498.589: 94.1799% ( 55) 00:19:10.943 11498.589 - 11558.167: 94.5850% ( 49) 00:19:10.943 11558.167 - 11617.745: 94.8826% ( 36) 00:19:10.943 11617.745 - 11677.324: 95.1472% ( 32) 00:19:10.943 11677.324 - 11736.902: 95.3786% ( 28) 00:19:10.943 11736.902 - 11796.480: 95.5936% ( 26) 00:19:10.943 11796.480 - 11856.058: 95.7672% ( 21) 00:19:10.943 11856.058 - 11915.636: 95.9904% ( 27) 00:19:10.943 11915.636 - 11975.215: 96.1888% ( 24) 00:19:10.943 11975.215 - 12034.793: 96.3707% ( 22) 00:19:10.943 12034.793 - 12094.371: 96.5939% ( 27) 00:19:10.943 12094.371 - 12153.949: 96.7923% ( 24) 00:19:10.943 12153.949 - 12213.527: 96.9329% ( 17) 00:19:10.943 12213.527 - 12273.105: 97.0734% ( 17) 00:19:10.943 12273.105 - 12332.684: 97.1974% ( 15) 00:19:10.943 12332.684 - 12392.262: 97.3214% ( 15) 00:19:10.943 12392.262 - 12451.840: 97.4289% ( 13) 00:19:10.943 12451.840 - 12511.418: 97.5281% ( 
12) 00:19:10.943 12511.418 - 12570.996: 97.6273% ( 12) 00:19:10.943 12570.996 - 12630.575: 97.7017% ( 9) 00:19:10.943 12630.575 - 12690.153: 97.7761% ( 9) 00:19:10.943 12690.153 - 12749.731: 97.8671% ( 11) 00:19:10.943 12749.731 - 12809.309: 97.9332% ( 8) 00:19:10.943 12809.309 - 12868.887: 97.9911% ( 7) 00:19:10.943 12868.887 - 12928.465: 98.0489% ( 7) 00:19:10.943 12928.465 - 12988.044: 98.1151% ( 8) 00:19:10.943 12988.044 - 13047.622: 98.1895% ( 9) 00:19:10.943 13047.622 - 13107.200: 98.2722% ( 10) 00:19:10.943 13107.200 - 13166.778: 98.3383% ( 8) 00:19:10.943 13166.778 - 13226.356: 98.3962% ( 7) 00:19:10.943 13226.356 - 13285.935: 98.4623% ( 8) 00:19:10.943 13285.935 - 13345.513: 98.5202% ( 7) 00:19:10.943 13345.513 - 13405.091: 98.5780% ( 7) 00:19:10.943 13405.091 - 13464.669: 98.6276% ( 6) 00:19:10.943 13464.669 - 13524.247: 98.6524% ( 3) 00:19:10.943 13524.247 - 13583.825: 98.6772% ( 3) 00:19:10.943 13583.825 - 13643.404: 98.7021% ( 3) 00:19:10.943 13643.404 - 13702.982: 98.7186% ( 2) 00:19:10.943 13702.982 - 13762.560: 98.7434% ( 3) 00:19:10.943 13762.560 - 13822.138: 98.7682% ( 3) 00:19:10.943 13822.138 - 13881.716: 98.7930% ( 3) 00:19:10.943 13881.716 - 13941.295: 98.8178% ( 3) 00:19:10.943 13941.295 - 14000.873: 98.8509% ( 4) 00:19:10.943 14000.873 - 14060.451: 98.8674% ( 2) 00:19:10.943 14060.451 - 14120.029: 98.9005% ( 4) 00:19:10.943 14120.029 - 14179.607: 98.9253% ( 3) 00:19:10.943 14179.607 - 14239.185: 98.9418% ( 2) 00:19:10.943 26810.182 - 26929.338: 98.9583% ( 2) 00:19:10.943 26929.338 - 27048.495: 98.9914% ( 4) 00:19:10.943 27048.495 - 27167.651: 99.0245% ( 4) 00:19:10.943 27167.651 - 27286.807: 99.0493% ( 3) 00:19:10.943 27286.807 - 27405.964: 99.0741% ( 3) 00:19:10.943 27405.964 - 27525.120: 99.0989% ( 3) 00:19:10.943 27525.120 - 27644.276: 99.1237% ( 3) 00:19:10.943 27644.276 - 27763.433: 99.1402% ( 2) 00:19:10.943 27763.433 - 27882.589: 99.1650% ( 3) 00:19:10.943 27882.589 - 28001.745: 99.1898% ( 3) 00:19:10.943 28001.745 - 28120.902: 99.2146% ( 3) 00:19:10.943 28120.902 - 28240.058: 99.2394% ( 3) 00:19:10.943 28240.058 - 28359.215: 99.2642% ( 3) 00:19:10.943 28359.215 - 28478.371: 99.2890% ( 3) 00:19:10.943 28478.371 - 28597.527: 99.3138% ( 3) 00:19:10.943 28597.527 - 28716.684: 99.3386% ( 3) 00:19:10.943 28716.684 - 28835.840: 99.3634% ( 3) 00:19:10.943 28835.840 - 28954.996: 99.3800% ( 2) 00:19:10.943 28954.996 - 29074.153: 99.3965% ( 2) 00:19:10.943 29074.153 - 29193.309: 99.4213% ( 3) 00:19:10.943 29193.309 - 29312.465: 99.4461% ( 3) 00:19:10.943 29312.465 - 29431.622: 99.4709% ( 3) 00:19:10.943 34317.033 - 34555.345: 99.5205% ( 6) 00:19:10.943 34555.345 - 34793.658: 99.5701% ( 6) 00:19:10.943 34793.658 - 35031.971: 99.6197% ( 6) 00:19:10.943 35031.971 - 35270.284: 99.6693% ( 6) 00:19:10.943 35270.284 - 35508.596: 99.7189% ( 6) 00:19:10.943 35508.596 - 35746.909: 99.7685% ( 6) 00:19:10.943 35746.909 - 35985.222: 99.8181% ( 6) 00:19:10.943 35985.222 - 36223.535: 99.8760% ( 7) 00:19:10.943 36223.535 - 36461.847: 99.9256% ( 6) 00:19:10.943 36461.847 - 36700.160: 99.9752% ( 6) 00:19:10.943 36700.160 - 36938.473: 100.0000% ( 3) 00:19:10.943 00:19:10.943 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:10.943 ============================================================================== 00:19:10.943 Range in us Cumulative IO count 00:19:10.943 8102.633 - 8162.211: 0.0165% ( 2) 00:19:10.943 8162.211 - 8221.789: 0.0579% ( 5) 00:19:10.943 8221.789 - 8281.367: 0.1323% ( 9) 00:19:10.943 8281.367 - 8340.945: 0.2315% ( 12) 00:19:10.943 8340.945 - 
8400.524: 0.5456% ( 38) 00:19:10.943 8400.524 - 8460.102: 0.9011% ( 43) 00:19:10.943 8460.102 - 8519.680: 1.3310% ( 52) 00:19:10.943 8519.680 - 8579.258: 1.8767% ( 66) 00:19:10.943 8579.258 - 8638.836: 2.5546% ( 82) 00:19:10.943 8638.836 - 8698.415: 3.2573% ( 85) 00:19:10.943 8698.415 - 8757.993: 4.0592% ( 97) 00:19:10.943 8757.993 - 8817.571: 4.8198% ( 92) 00:19:10.943 8817.571 - 8877.149: 5.5142% ( 84) 00:19:10.943 8877.149 - 8936.727: 6.2417% ( 88) 00:19:10.943 8936.727 - 8996.305: 7.0106% ( 93) 00:19:10.943 8996.305 - 9055.884: 7.7960% ( 95) 00:19:10.943 9055.884 - 9115.462: 8.6144% ( 99) 00:19:10.943 9115.462 - 9175.040: 9.3833% ( 93) 00:19:10.943 9175.040 - 9234.618: 10.1604% ( 94) 00:19:10.943 9234.618 - 9294.196: 10.9788% ( 99) 00:19:10.943 9294.196 - 9353.775: 11.6981% ( 87) 00:19:10.943 9353.775 - 9413.353: 12.4917% ( 96) 00:19:10.943 9413.353 - 9472.931: 13.4259% ( 113) 00:19:10.943 9472.931 - 9532.509: 14.5585% ( 137) 00:19:10.943 9532.509 - 9592.087: 15.9640% ( 170) 00:19:10.943 9592.087 - 9651.665: 17.4934% ( 185) 00:19:10.943 9651.665 - 9711.244: 19.1964% ( 206) 00:19:10.943 9711.244 - 9770.822: 21.2798% ( 252) 00:19:10.943 9770.822 - 9830.400: 23.6194% ( 283) 00:19:10.943 9830.400 - 9889.978: 26.2814% ( 322) 00:19:10.943 9889.978 - 9949.556: 29.2163% ( 355) 00:19:10.943 9949.556 - 10009.135: 32.4983% ( 397) 00:19:10.943 10009.135 - 10068.713: 35.9210% ( 414) 00:19:10.943 10068.713 - 10128.291: 39.5420% ( 438) 00:19:10.943 10128.291 - 10187.869: 43.2705% ( 451) 00:19:10.943 10187.869 - 10247.447: 46.9742% ( 448) 00:19:10.943 10247.447 - 10307.025: 50.8267% ( 466) 00:19:10.943 10307.025 - 10366.604: 54.7784% ( 478) 00:19:10.943 10366.604 - 10426.182: 58.7054% ( 475) 00:19:10.943 10426.182 - 10485.760: 62.6571% ( 478) 00:19:10.943 10485.760 - 10545.338: 66.5923% ( 476) 00:19:10.943 10545.338 - 10604.916: 70.4778% ( 470) 00:19:10.943 10604.916 - 10664.495: 74.0493% ( 432) 00:19:10.943 10664.495 - 10724.073: 77.3148% ( 395) 00:19:10.943 10724.073 - 10783.651: 80.1587% ( 344) 00:19:10.943 10783.651 - 10843.229: 82.6389% ( 300) 00:19:10.943 10843.229 - 10902.807: 84.7140% ( 251) 00:19:10.943 10902.807 - 10962.385: 86.4583% ( 211) 00:19:10.943 10962.385 - 11021.964: 87.9216% ( 177) 00:19:10.943 11021.964 - 11081.542: 89.1948% ( 154) 00:19:10.943 11081.542 - 11141.120: 90.2034% ( 122) 00:19:10.943 11141.120 - 11200.698: 91.1624% ( 116) 00:19:10.943 11200.698 - 11260.276: 91.9064% ( 90) 00:19:10.943 11260.276 - 11319.855: 92.5182% ( 74) 00:19:10.943 11319.855 - 11379.433: 93.0721% ( 67) 00:19:10.943 11379.433 - 11439.011: 93.5599% ( 59) 00:19:10.943 11439.011 - 11498.589: 94.0146% ( 55) 00:19:10.943 11498.589 - 11558.167: 94.3700% ( 43) 00:19:10.943 11558.167 - 11617.745: 94.7255% ( 43) 00:19:10.943 11617.745 - 11677.324: 95.0728% ( 42) 00:19:10.943 11677.324 - 11736.902: 95.3456% ( 33) 00:19:10.943 11736.902 - 11796.480: 95.5853% ( 29) 00:19:10.943 11796.480 - 11856.058: 95.8085% ( 27) 00:19:10.943 11856.058 - 11915.636: 96.0317% ( 27) 00:19:10.943 11915.636 - 11975.215: 96.2136% ( 22) 00:19:10.943 11975.215 - 12034.793: 96.3872% ( 21) 00:19:10.944 12034.793 - 12094.371: 96.5443% ( 19) 00:19:10.944 12094.371 - 12153.949: 96.6518% ( 13) 00:19:10.944 12153.949 - 12213.527: 96.7841% ( 16) 00:19:10.944 12213.527 - 12273.105: 96.8998% ( 14) 00:19:10.944 12273.105 - 12332.684: 97.0403% ( 17) 00:19:10.944 12332.684 - 12392.262: 97.1644% ( 15) 00:19:10.944 12392.262 - 12451.840: 97.2801% ( 14) 00:19:10.944 12451.840 - 12511.418: 97.3793% ( 12) 00:19:10.944 12511.418 - 12570.996: 97.4702% ( 
11) 00:19:10.944 12570.996 - 12630.575: 97.6025% ( 16) 00:19:10.944 12630.575 - 12690.153: 97.7265% ( 15) 00:19:10.944 12690.153 - 12749.731: 97.8175% ( 11) 00:19:10.944 12749.731 - 12809.309: 97.9001% ( 10) 00:19:10.944 12809.309 - 12868.887: 97.9580% ( 7) 00:19:10.944 12868.887 - 12928.465: 98.0159% ( 7) 00:19:10.944 12928.465 - 12988.044: 98.0737% ( 7) 00:19:10.944 12988.044 - 13047.622: 98.1233% ( 6) 00:19:10.944 13047.622 - 13107.200: 98.1647% ( 5) 00:19:10.944 13107.200 - 13166.778: 98.2060% ( 5) 00:19:10.944 13166.778 - 13226.356: 98.2722% ( 8) 00:19:10.944 13226.356 - 13285.935: 98.3300% ( 7) 00:19:10.944 13285.935 - 13345.513: 98.3796% ( 6) 00:19:10.944 13345.513 - 13405.091: 98.4458% ( 8) 00:19:10.944 13405.091 - 13464.669: 98.5036% ( 7) 00:19:10.944 13464.669 - 13524.247: 98.5615% ( 7) 00:19:10.944 13524.247 - 13583.825: 98.6194% ( 7) 00:19:10.944 13583.825 - 13643.404: 98.6442% ( 3) 00:19:10.944 13643.404 - 13702.982: 98.6690% ( 3) 00:19:10.944 13702.982 - 13762.560: 98.7021% ( 4) 00:19:10.944 13762.560 - 13822.138: 98.7269% ( 3) 00:19:10.944 13822.138 - 13881.716: 98.7517% ( 3) 00:19:10.944 13881.716 - 13941.295: 98.7765% ( 3) 00:19:10.944 13941.295 - 14000.873: 98.8095% ( 4) 00:19:10.944 14000.873 - 14060.451: 98.8343% ( 3) 00:19:10.944 14060.451 - 14120.029: 98.8591% ( 3) 00:19:10.944 14120.029 - 14179.607: 98.8922% ( 4) 00:19:10.944 14179.607 - 14239.185: 98.9170% ( 3) 00:19:10.944 14239.185 - 14298.764: 98.9335% ( 2) 00:19:10.944 14298.764 - 14358.342: 98.9418% ( 1) 00:19:10.944 23831.273 - 23950.429: 98.9501% ( 1) 00:19:10.944 23950.429 - 24069.585: 98.9666% ( 2) 00:19:10.944 24069.585 - 24188.742: 98.9914% ( 3) 00:19:10.944 24188.742 - 24307.898: 99.0162% ( 3) 00:19:10.944 24307.898 - 24427.055: 99.0410% ( 3) 00:19:10.944 24427.055 - 24546.211: 99.0658% ( 3) 00:19:10.944 24546.211 - 24665.367: 99.0823% ( 2) 00:19:10.944 24665.367 - 24784.524: 99.1154% ( 4) 00:19:10.944 24784.524 - 24903.680: 99.1319% ( 2) 00:19:10.944 24903.680 - 25022.836: 99.1567% ( 3) 00:19:10.944 25022.836 - 25141.993: 99.1815% ( 3) 00:19:10.944 25141.993 - 25261.149: 99.1981% ( 2) 00:19:10.944 25261.149 - 25380.305: 99.2229% ( 3) 00:19:10.944 25380.305 - 25499.462: 99.2477% ( 3) 00:19:10.944 25499.462 - 25618.618: 99.2725% ( 3) 00:19:10.944 25618.618 - 25737.775: 99.2973% ( 3) 00:19:10.944 25737.775 - 25856.931: 99.3221% ( 3) 00:19:10.944 25856.931 - 25976.087: 99.3469% ( 3) 00:19:10.944 25976.087 - 26095.244: 99.3717% ( 3) 00:19:10.944 26095.244 - 26214.400: 99.3882% ( 2) 00:19:10.944 26214.400 - 26333.556: 99.4130% ( 3) 00:19:10.944 26333.556 - 26452.713: 99.4378% ( 3) 00:19:10.944 26452.713 - 26571.869: 99.4626% ( 3) 00:19:10.944 26571.869 - 26691.025: 99.4709% ( 1) 00:19:10.944 31218.967 - 31457.280: 99.4957% ( 3) 00:19:10.944 31457.280 - 31695.593: 99.5453% ( 6) 00:19:10.944 31695.593 - 31933.905: 99.5949% ( 6) 00:19:10.944 31933.905 - 32172.218: 99.6528% ( 7) 00:19:10.944 32172.218 - 32410.531: 99.7024% ( 6) 00:19:10.944 32410.531 - 32648.844: 99.7520% ( 6) 00:19:10.944 32648.844 - 32887.156: 99.8016% ( 6) 00:19:10.944 32887.156 - 33125.469: 99.8512% ( 6) 00:19:10.944 33125.469 - 33363.782: 99.9008% ( 6) 00:19:10.944 33363.782 - 33602.095: 99.9587% ( 7) 00:19:10.944 33602.095 - 33840.407: 100.0000% ( 5) 00:19:10.944 00:19:10.944 11:33:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:12.356 Initializing NVMe Controllers 00:19:12.356 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:12.356 
Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:12.356 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:12.356 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:12.356 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:12.356 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:12.356 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:12.356 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:12.356 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:12.356 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:12.356 Initialization complete. Launching workers. 00:19:12.356 ======================================================== 00:19:12.356 Latency(us) 00:19:12.356 Device Information : IOPS MiB/s Average min max 00:19:12.356 PCIE (0000:00:10.0) NSID 1 from core 0: 10785.61 126.39 11898.47 9797.63 42202.43 00:19:12.356 PCIE (0000:00:11.0) NSID 1 from core 0: 10785.61 126.39 11875.16 10124.16 39967.15 00:19:12.356 PCIE (0000:00:13.0) NSID 1 from core 0: 10785.61 126.39 11851.44 10130.17 38426.48 00:19:12.356 PCIE (0000:00:12.0) NSID 1 from core 0: 10785.61 126.39 11827.59 9967.41 36142.94 00:19:12.356 PCIE (0000:00:12.0) NSID 2 from core 0: 10849.43 127.14 11734.97 10118.58 27520.12 00:19:12.356 PCIE (0000:00:12.0) NSID 3 from core 0: 10849.43 127.14 11712.23 10126.43 25041.46 00:19:12.356 ======================================================== 00:19:12.356 Total : 64841.28 759.86 11816.46 9797.63 42202.43 00:19:12.356 00:19:12.356 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:12.356 ================================================================================= 00:19:12.356 1.00000% : 10247.447us 00:19:12.356 10.00000% : 10664.495us 00:19:12.356 25.00000% : 11081.542us 00:19:12.356 50.00000% : 11558.167us 00:19:12.356 75.00000% : 12094.371us 00:19:12.356 90.00000% : 12690.153us 00:19:12.356 95.00000% : 13166.778us 00:19:12.356 98.00000% : 14358.342us 00:19:12.356 99.00000% : 33363.782us 00:19:12.356 99.50000% : 40274.851us 00:19:12.356 99.90000% : 41943.040us 00:19:12.356 99.99000% : 42181.353us 00:19:12.356 99.99900% : 42419.665us 00:19:12.356 99.99990% : 42419.665us 00:19:12.356 99.99999% : 42419.665us 00:19:12.356 00:19:12.356 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:12.356 ================================================================================= 00:19:12.356 1.00000% : 10485.760us 00:19:12.356 10.00000% : 10843.229us 00:19:12.356 25.00000% : 11141.120us 00:19:12.356 50.00000% : 11558.167us 00:19:12.356 75.00000% : 11975.215us 00:19:12.356 90.00000% : 12511.418us 00:19:12.356 95.00000% : 13047.622us 00:19:12.356 98.00000% : 14417.920us 00:19:12.356 99.00000% : 30980.655us 00:19:12.356 99.50000% : 38130.036us 00:19:12.357 99.90000% : 39798.225us 00:19:12.357 99.99000% : 40036.538us 00:19:12.357 99.99900% : 40036.538us 00:19:12.357 99.99990% : 40036.538us 00:19:12.357 99.99999% : 40036.538us 00:19:12.357 00:19:12.357 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:12.357 ================================================================================= 00:19:12.357 1.00000% : 10485.760us 00:19:12.357 10.00000% : 10843.229us 00:19:12.357 25.00000% : 11141.120us 00:19:12.357 50.00000% : 11558.167us 00:19:12.357 75.00000% : 11975.215us 00:19:12.357 90.00000% : 12570.996us 00:19:12.357 95.00000% : 13047.622us 00:19:12.357 98.00000% : 14358.342us 00:19:12.357 99.00000% : 29312.465us 00:19:12.357 99.50000% : 36461.847us 00:19:12.357 
99.90000% : 38130.036us 00:19:12.357 99.99000% : 38606.662us 00:19:12.357 99.99900% : 38606.662us 00:19:12.357 99.99990% : 38606.662us 00:19:12.357 99.99999% : 38606.662us 00:19:12.357 00:19:12.357 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:12.357 ================================================================================= 00:19:12.357 1.00000% : 10426.182us 00:19:12.357 10.00000% : 10843.229us 00:19:12.357 25.00000% : 11141.120us 00:19:12.357 50.00000% : 11558.167us 00:19:12.357 75.00000% : 12034.793us 00:19:12.357 90.00000% : 12570.996us 00:19:12.357 95.00000% : 12988.044us 00:19:12.357 98.00000% : 14417.920us 00:19:12.357 99.00000% : 26691.025us 00:19:12.357 99.50000% : 34317.033us 00:19:12.357 99.90000% : 35985.222us 00:19:12.357 99.99000% : 36223.535us 00:19:12.357 99.99900% : 36223.535us 00:19:12.357 99.99990% : 36223.535us 00:19:12.357 99.99999% : 36223.535us 00:19:12.357 00:19:12.357 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:12.357 ================================================================================= 00:19:12.357 1.00000% : 10485.760us 00:19:12.357 10.00000% : 10843.229us 00:19:12.357 25.00000% : 11141.120us 00:19:12.357 50.00000% : 11558.167us 00:19:12.357 75.00000% : 12034.793us 00:19:12.357 90.00000% : 12630.575us 00:19:12.357 95.00000% : 13107.200us 00:19:12.357 98.00000% : 14298.764us 00:19:12.357 99.00000% : 18945.862us 00:19:12.357 99.50000% : 25737.775us 00:19:12.357 99.90000% : 27286.807us 00:19:12.357 99.99000% : 27525.120us 00:19:12.357 99.99900% : 27525.120us 00:19:12.357 99.99990% : 27525.120us 00:19:12.357 99.99999% : 27525.120us 00:19:12.357 00:19:12.357 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:12.357 ================================================================================= 00:19:12.357 1.00000% : 10426.182us 00:19:12.357 10.00000% : 10783.651us 00:19:12.357 25.00000% : 11141.120us 00:19:12.357 50.00000% : 11558.167us 00:19:12.357 75.00000% : 12034.793us 00:19:12.357 90.00000% : 12630.575us 00:19:12.357 95.00000% : 13166.778us 00:19:12.357 98.00000% : 14417.920us 00:19:12.357 99.00000% : 16324.422us 00:19:12.357 99.50000% : 23354.647us 00:19:12.357 99.90000% : 24784.524us 00:19:12.357 99.99000% : 25022.836us 00:19:12.357 99.99900% : 25141.993us 00:19:12.357 99.99990% : 25141.993us 00:19:12.357 99.99999% : 25141.993us 00:19:12.357 00:19:12.357 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:12.357 ============================================================================== 00:19:12.357 Range in us Cumulative IO count 00:19:12.357 9770.822 - 9830.400: 0.0185% ( 2) 00:19:12.357 9830.400 - 9889.978: 0.0555% ( 4) 00:19:12.357 9889.978 - 9949.556: 0.0832% ( 3) 00:19:12.357 9949.556 - 10009.135: 0.1757% ( 10) 00:19:12.357 10009.135 - 10068.713: 0.2681% ( 10) 00:19:12.357 10068.713 - 10128.291: 0.3883% ( 13) 00:19:12.357 10128.291 - 10187.869: 0.8413% ( 49) 00:19:12.357 10187.869 - 10247.447: 1.4238% ( 63) 00:19:12.357 10247.447 - 10307.025: 2.0987% ( 73) 00:19:12.357 10307.025 - 10366.604: 2.8846% ( 85) 00:19:12.357 10366.604 - 10426.182: 3.9848% ( 119) 00:19:12.357 10426.182 - 10485.760: 5.6583% ( 181) 00:19:12.357 10485.760 - 10545.338: 7.1838% ( 165) 00:19:12.357 10545.338 - 10604.916: 8.4782% ( 140) 00:19:12.357 10604.916 - 10664.495: 10.0222% ( 167) 00:19:12.357 10664.495 - 10724.073: 11.6679% ( 178) 00:19:12.357 10724.073 - 10783.651: 13.8314% ( 234) 00:19:12.357 10783.651 - 10843.229: 16.2999% ( 267) 00:19:12.357 10843.229 - 10902.807: 
18.8517% ( 276) 00:19:12.357 10902.807 - 10962.385: 21.5422% ( 291) 00:19:12.357 10962.385 - 11021.964: 24.7226% ( 344) 00:19:12.357 11021.964 - 11081.542: 27.5425% ( 305) 00:19:12.357 11081.542 - 11141.120: 30.1960% ( 287) 00:19:12.357 11141.120 - 11200.698: 32.7478% ( 276) 00:19:12.357 11200.698 - 11260.276: 35.7433% ( 324) 00:19:12.357 11260.276 - 11319.855: 38.8221% ( 333) 00:19:12.357 11319.855 - 11379.433: 42.1505% ( 360) 00:19:12.357 11379.433 - 11439.011: 45.3217% ( 343) 00:19:12.357 11439.011 - 11498.589: 48.2341% ( 315) 00:19:12.357 11498.589 - 11558.167: 51.2204% ( 323) 00:19:12.357 11558.167 - 11617.745: 54.6135% ( 367) 00:19:12.357 11617.745 - 11677.324: 57.8495% ( 350) 00:19:12.357 11677.324 - 11736.902: 60.8173% ( 321) 00:19:12.357 11736.902 - 11796.480: 63.7297% ( 315) 00:19:12.357 11796.480 - 11856.058: 66.5958% ( 310) 00:19:12.357 11856.058 - 11915.636: 69.4712% ( 311) 00:19:12.357 11915.636 - 11975.215: 72.1616% ( 291) 00:19:12.357 11975.215 - 12034.793: 74.5562% ( 259) 00:19:12.357 12034.793 - 12094.371: 76.6827% ( 230) 00:19:12.357 12094.371 - 12153.949: 78.6982% ( 218) 00:19:12.357 12153.949 - 12213.527: 80.6213% ( 208) 00:19:12.357 12213.527 - 12273.105: 82.4797% ( 201) 00:19:12.357 12273.105 - 12332.684: 84.0699% ( 172) 00:19:12.357 12332.684 - 12392.262: 85.5399% ( 159) 00:19:12.357 12392.262 - 12451.840: 86.7511% ( 131) 00:19:12.357 12451.840 - 12511.418: 87.8883% ( 123) 00:19:12.357 12511.418 - 12570.996: 88.9146% ( 111) 00:19:12.357 12570.996 - 12630.575: 89.8206% ( 98) 00:19:12.357 12630.575 - 12690.153: 90.5603% ( 80) 00:19:12.357 12690.153 - 12749.731: 91.3369% ( 84) 00:19:12.357 12749.731 - 12809.309: 92.0396% ( 76) 00:19:12.357 12809.309 - 12868.887: 92.7145% ( 73) 00:19:12.357 12868.887 - 12928.465: 93.3339% ( 67) 00:19:12.357 12928.465 - 12988.044: 94.0089% ( 73) 00:19:12.357 12988.044 - 13047.622: 94.5451% ( 58) 00:19:12.357 13047.622 - 13107.200: 94.9242% ( 41) 00:19:12.357 13107.200 - 13166.778: 95.3033% ( 41) 00:19:12.357 13166.778 - 13226.356: 95.5344% ( 25) 00:19:12.357 13226.356 - 13285.935: 95.7470% ( 23) 00:19:12.357 13285.935 - 13345.513: 95.8765% ( 14) 00:19:12.357 13345.513 - 13405.091: 96.0244% ( 16) 00:19:12.357 13405.091 - 13464.669: 96.1538% ( 14) 00:19:12.357 13464.669 - 13524.247: 96.3203% ( 18) 00:19:12.357 13524.247 - 13583.825: 96.4682% ( 16) 00:19:12.357 13583.825 - 13643.404: 96.5884% ( 13) 00:19:12.357 13643.404 - 13702.982: 96.7456% ( 17) 00:19:12.357 13702.982 - 13762.560: 96.8473% ( 11) 00:19:12.357 13762.560 - 13822.138: 96.9397% ( 10) 00:19:12.357 13822.138 - 13881.716: 97.0599% ( 13) 00:19:12.357 13881.716 - 13941.295: 97.2633% ( 22) 00:19:12.357 13941.295 - 14000.873: 97.4482% ( 20) 00:19:12.357 14000.873 - 14060.451: 97.6054% ( 17) 00:19:12.357 14060.451 - 14120.029: 97.7071% ( 11) 00:19:12.357 14120.029 - 14179.607: 97.8088% ( 11) 00:19:12.357 14179.607 - 14239.185: 97.8920% ( 9) 00:19:12.357 14239.185 - 14298.764: 97.9567% ( 7) 00:19:12.357 14298.764 - 14358.342: 98.0030% ( 5) 00:19:12.357 14358.342 - 14417.920: 98.0677% ( 7) 00:19:12.357 14417.920 - 14477.498: 98.1601% ( 10) 00:19:12.357 14477.498 - 14537.076: 98.1971% ( 4) 00:19:12.357 14537.076 - 14596.655: 98.2711% ( 8) 00:19:12.357 14596.655 - 14656.233: 98.3266% ( 6) 00:19:12.357 14656.233 - 14715.811: 98.4190% ( 10) 00:19:12.357 14715.811 - 14775.389: 98.4745% ( 6) 00:19:12.357 14775.389 - 14834.967: 98.5300% ( 6) 00:19:12.357 14834.967 - 14894.545: 98.5947% ( 7) 00:19:12.357 14894.545 - 14954.124: 98.6501% ( 6) 00:19:12.357 14954.124 - 15013.702: 98.7241% ( 8) 
00:19:12.357 15013.702 - 15073.280: 98.7611% ( 4) 00:19:12.357 15073.280 - 15132.858: 98.7796% ( 2) 00:19:12.357 15132.858 - 15192.436: 98.8073% ( 3) 00:19:12.357 15192.436 - 15252.015: 98.8166% ( 1) 00:19:12.357 32410.531 - 32648.844: 98.8443% ( 3) 00:19:12.357 32648.844 - 32887.156: 98.8998% ( 6) 00:19:12.357 32887.156 - 33125.469: 98.9645% ( 7) 00:19:12.357 33125.469 - 33363.782: 99.0292% ( 7) 00:19:12.357 33363.782 - 33602.095: 99.0754% ( 5) 00:19:12.357 33602.095 - 33840.407: 99.1494% ( 8) 00:19:12.357 33840.407 - 34078.720: 99.1771% ( 3) 00:19:12.357 34078.720 - 34317.033: 99.2511% ( 8) 00:19:12.357 34317.033 - 34555.345: 99.3158% ( 7) 00:19:12.357 34555.345 - 34793.658: 99.3805% ( 7) 00:19:12.357 34793.658 - 35031.971: 99.4083% ( 3) 00:19:12.357 39798.225 - 40036.538: 99.4453% ( 4) 00:19:12.357 40036.538 - 40274.851: 99.5007% ( 6) 00:19:12.357 40274.851 - 40513.164: 99.5655% ( 7) 00:19:12.357 40513.164 - 40751.476: 99.6209% ( 6) 00:19:12.357 40751.476 - 40989.789: 99.6857% ( 7) 00:19:12.357 40989.789 - 41228.102: 99.7411% ( 6) 00:19:12.357 41228.102 - 41466.415: 99.8151% ( 8) 00:19:12.357 41466.415 - 41704.727: 99.8798% ( 7) 00:19:12.357 41704.727 - 41943.040: 99.9353% ( 6) 00:19:12.357 41943.040 - 42181.353: 99.9908% ( 6) 00:19:12.357 42181.353 - 42419.665: 100.0000% ( 1) 00:19:12.357 00:19:12.357 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:12.358 ============================================================================== 00:19:12.358 Range in us Cumulative IO count 00:19:12.358 10068.713 - 10128.291: 0.0185% ( 2) 00:19:12.358 10128.291 - 10187.869: 0.0277% ( 1) 00:19:12.358 10187.869 - 10247.447: 0.1387% ( 12) 00:19:12.358 10247.447 - 10307.025: 0.2219% ( 9) 00:19:12.358 10307.025 - 10366.604: 0.5178% ( 32) 00:19:12.358 10366.604 - 10426.182: 0.8691% ( 38) 00:19:12.358 10426.182 - 10485.760: 1.6272% ( 82) 00:19:12.358 10485.760 - 10545.338: 2.4593% ( 90) 00:19:12.358 10545.338 - 10604.916: 3.6428% ( 128) 00:19:12.358 10604.916 - 10664.495: 5.0943% ( 157) 00:19:12.358 10664.495 - 10724.073: 6.6568% ( 169) 00:19:12.358 10724.073 - 10783.651: 8.6261% ( 213) 00:19:12.358 10783.651 - 10843.229: 11.1132% ( 269) 00:19:12.358 10843.229 - 10902.807: 13.6002% ( 269) 00:19:12.358 10902.807 - 10962.385: 16.5865% ( 323) 00:19:12.358 10962.385 - 11021.964: 19.7023% ( 337) 00:19:12.358 11021.964 - 11081.542: 22.8828% ( 344) 00:19:12.358 11081.542 - 11141.120: 26.0725% ( 345) 00:19:12.358 11141.120 - 11200.698: 29.8077% ( 404) 00:19:12.358 11200.698 - 11260.276: 34.1069% ( 465) 00:19:12.358 11260.276 - 11319.855: 37.8976% ( 410) 00:19:12.358 11319.855 - 11379.433: 42.1413% ( 459) 00:19:12.358 11379.433 - 11439.011: 45.6638% ( 381) 00:19:12.358 11439.011 - 11498.589: 49.8151% ( 449) 00:19:12.358 11498.589 - 11558.167: 53.4856% ( 397) 00:19:12.358 11558.167 - 11617.745: 57.3687% ( 420) 00:19:12.358 11617.745 - 11677.324: 60.9930% ( 392) 00:19:12.358 11677.324 - 11736.902: 64.2104% ( 348) 00:19:12.358 11736.902 - 11796.480: 67.0673% ( 309) 00:19:12.358 11796.480 - 11856.058: 70.1831% ( 337) 00:19:12.358 11856.058 - 11915.636: 73.3173% ( 339) 00:19:12.358 11915.636 - 11975.215: 76.0355% ( 294) 00:19:12.358 11975.215 - 12034.793: 78.2082% ( 235) 00:19:12.358 12034.793 - 12094.371: 80.2885% ( 225) 00:19:12.358 12094.371 - 12153.949: 82.4057% ( 229) 00:19:12.358 12153.949 - 12213.527: 84.1624% ( 190) 00:19:12.358 12213.527 - 12273.105: 85.5492% ( 150) 00:19:12.358 12273.105 - 12332.684: 86.8436% ( 140) 00:19:12.358 12332.684 - 12392.262: 88.1010% ( 136) 00:19:12.358 12392.262 
- 12451.840: 89.1457% ( 113) 00:19:12.358 12451.840 - 12511.418: 90.1072% ( 104) 00:19:12.358 12511.418 - 12570.996: 90.9578% ( 92) 00:19:12.358 12570.996 - 12630.575: 91.5865% ( 68) 00:19:12.358 12630.575 - 12690.153: 92.1320% ( 59) 00:19:12.358 12690.153 - 12749.731: 92.7515% ( 67) 00:19:12.358 12749.731 - 12809.309: 93.4264% ( 73) 00:19:12.358 12809.309 - 12868.887: 93.9349% ( 55) 00:19:12.358 12868.887 - 12928.465: 94.3602% ( 46) 00:19:12.358 12928.465 - 12988.044: 94.6561% ( 32) 00:19:12.358 12988.044 - 13047.622: 95.1276% ( 51) 00:19:12.358 13047.622 - 13107.200: 95.3402% ( 23) 00:19:12.358 13107.200 - 13166.778: 95.5251% ( 20) 00:19:12.358 13166.778 - 13226.356: 95.7008% ( 19) 00:19:12.358 13226.356 - 13285.935: 95.9412% ( 26) 00:19:12.358 13285.935 - 13345.513: 96.0152% ( 8) 00:19:12.358 13345.513 - 13405.091: 96.0984% ( 9) 00:19:12.358 13405.091 - 13464.669: 96.2001% ( 11) 00:19:12.358 13464.669 - 13524.247: 96.2833% ( 9) 00:19:12.358 13524.247 - 13583.825: 96.3665% ( 9) 00:19:12.358 13583.825 - 13643.404: 96.4867% ( 13) 00:19:12.358 13643.404 - 13702.982: 96.5884% ( 11) 00:19:12.358 13702.982 - 13762.560: 96.6531% ( 7) 00:19:12.358 13762.560 - 13822.138: 96.7271% ( 8) 00:19:12.358 13822.138 - 13881.716: 96.8010% ( 8) 00:19:12.358 13881.716 - 13941.295: 96.8842% ( 9) 00:19:12.358 13941.295 - 14000.873: 96.9952% ( 12) 00:19:12.358 14000.873 - 14060.451: 97.1154% ( 13) 00:19:12.358 14060.451 - 14120.029: 97.2171% ( 11) 00:19:12.358 14120.029 - 14179.607: 97.3650% ( 16) 00:19:12.358 14179.607 - 14239.185: 97.5407% ( 19) 00:19:12.358 14239.185 - 14298.764: 97.7441% ( 22) 00:19:12.358 14298.764 - 14358.342: 97.9105% ( 18) 00:19:12.358 14358.342 - 14417.920: 98.0492% ( 15) 00:19:12.358 14417.920 - 14477.498: 98.2896% ( 26) 00:19:12.358 14477.498 - 14537.076: 98.3820% ( 10) 00:19:12.358 14537.076 - 14596.655: 98.4745% ( 10) 00:19:12.358 14596.655 - 14656.233: 98.5577% ( 9) 00:19:12.358 14656.233 - 14715.811: 98.6501% ( 10) 00:19:12.358 14715.811 - 14775.389: 98.7241% ( 8) 00:19:12.358 14775.389 - 14834.967: 98.7703% ( 5) 00:19:12.358 14834.967 - 14894.545: 98.7981% ( 3) 00:19:12.358 14894.545 - 14954.124: 98.8166% ( 2) 00:19:12.358 30146.560 - 30265.716: 98.8351% ( 2) 00:19:12.358 30265.716 - 30384.873: 98.8628% ( 3) 00:19:12.358 30384.873 - 30504.029: 98.8905% ( 3) 00:19:12.358 30504.029 - 30742.342: 98.9553% ( 7) 00:19:12.358 30742.342 - 30980.655: 99.0292% ( 8) 00:19:12.358 30980.655 - 31218.967: 99.0939% ( 7) 00:19:12.358 31218.967 - 31457.280: 99.1587% ( 7) 00:19:12.358 31457.280 - 31695.593: 99.2234% ( 7) 00:19:12.358 31695.593 - 31933.905: 99.2881% ( 7) 00:19:12.358 31933.905 - 32172.218: 99.3528% ( 7) 00:19:12.358 32172.218 - 32410.531: 99.4083% ( 6) 00:19:12.358 37415.098 - 37653.411: 99.4175% ( 1) 00:19:12.358 37653.411 - 37891.724: 99.4730% ( 6) 00:19:12.358 37891.724 - 38130.036: 99.5377% ( 7) 00:19:12.358 38130.036 - 38368.349: 99.5839% ( 5) 00:19:12.358 38368.349 - 38606.662: 99.6487% ( 7) 00:19:12.358 38606.662 - 38844.975: 99.7041% ( 6) 00:19:12.358 38844.975 - 39083.287: 99.7596% ( 6) 00:19:12.358 39083.287 - 39321.600: 99.8243% ( 7) 00:19:12.358 39321.600 - 39559.913: 99.8891% ( 7) 00:19:12.358 39559.913 - 39798.225: 99.9445% ( 6) 00:19:12.358 39798.225 - 40036.538: 100.0000% ( 6) 00:19:12.358 00:19:12.358 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:12.358 ============================================================================== 00:19:12.358 Range in us Cumulative IO count 00:19:12.358 10128.291 - 10187.869: 0.0555% ( 6) 00:19:12.358 
10187.869 - 10247.447: 0.2219% ( 18) 00:19:12.358 10247.447 - 10307.025: 0.4253% ( 22) 00:19:12.358 10307.025 - 10366.604: 0.6287% ( 22) 00:19:12.358 10366.604 - 10426.182: 0.9430% ( 34) 00:19:12.358 10426.182 - 10485.760: 1.6642% ( 78) 00:19:12.358 10485.760 - 10545.338: 2.7089% ( 113) 00:19:12.358 10545.338 - 10604.916: 4.0865% ( 149) 00:19:12.358 10604.916 - 10664.495: 5.5473% ( 158) 00:19:12.358 10664.495 - 10724.073: 7.2115% ( 180) 00:19:12.358 10724.073 - 10783.651: 9.5692% ( 255) 00:19:12.358 10783.651 - 10843.229: 12.2134% ( 286) 00:19:12.358 10843.229 - 10902.807: 14.7652% ( 276) 00:19:12.358 10902.807 - 10962.385: 17.2522% ( 269) 00:19:12.358 10962.385 - 11021.964: 20.1461% ( 313) 00:19:12.358 11021.964 - 11081.542: 23.3635% ( 348) 00:19:12.358 11081.542 - 11141.120: 26.6457% ( 355) 00:19:12.358 11141.120 - 11200.698: 30.5381% ( 421) 00:19:12.358 11200.698 - 11260.276: 34.4397% ( 422) 00:19:12.358 11260.276 - 11319.855: 38.4523% ( 434) 00:19:12.358 11319.855 - 11379.433: 42.2984% ( 416) 00:19:12.358 11379.433 - 11439.011: 46.1261% ( 414) 00:19:12.358 11439.011 - 11498.589: 49.9815% ( 417) 00:19:12.358 11498.589 - 11558.167: 53.7907% ( 412) 00:19:12.358 11558.167 - 11617.745: 57.0544% ( 353) 00:19:12.358 11617.745 - 11677.324: 60.1424% ( 334) 00:19:12.358 11677.324 - 11736.902: 63.7019% ( 385) 00:19:12.358 11736.902 - 11796.480: 67.2800% ( 387) 00:19:12.358 11796.480 - 11856.058: 70.2385% ( 320) 00:19:12.358 11856.058 - 11915.636: 72.8643% ( 284) 00:19:12.358 11915.636 - 11975.215: 75.5825% ( 294) 00:19:12.358 11975.215 - 12034.793: 77.8014% ( 240) 00:19:12.358 12034.793 - 12094.371: 79.8354% ( 220) 00:19:12.358 12094.371 - 12153.949: 81.8047% ( 213) 00:19:12.358 12153.949 - 12213.527: 83.5244% ( 186) 00:19:12.358 12213.527 - 12273.105: 85.1886% ( 180) 00:19:12.358 12273.105 - 12332.684: 86.5385% ( 146) 00:19:12.358 12332.684 - 12392.262: 87.6479% ( 120) 00:19:12.358 12392.262 - 12451.840: 88.6742% ( 111) 00:19:12.358 12451.840 - 12511.418: 89.5803% ( 98) 00:19:12.358 12511.418 - 12570.996: 90.4678% ( 96) 00:19:12.358 12570.996 - 12630.575: 91.1982% ( 79) 00:19:12.358 12630.575 - 12690.153: 91.9101% ( 77) 00:19:12.358 12690.153 - 12749.731: 92.4926% ( 63) 00:19:12.358 12749.731 - 12809.309: 93.0381% ( 59) 00:19:12.358 12809.309 - 12868.887: 93.5096% ( 51) 00:19:12.358 12868.887 - 12928.465: 94.3695% ( 93) 00:19:12.358 12928.465 - 12988.044: 94.7947% ( 46) 00:19:12.358 12988.044 - 13047.622: 95.1461% ( 38) 00:19:12.358 13047.622 - 13107.200: 95.4604% ( 34) 00:19:12.358 13107.200 - 13166.778: 95.7193% ( 28) 00:19:12.358 13166.778 - 13226.356: 95.9782% ( 28) 00:19:12.358 13226.356 - 13285.935: 96.1723% ( 21) 00:19:12.358 13285.935 - 13345.513: 96.3480% ( 19) 00:19:12.358 13345.513 - 13405.091: 96.4682% ( 13) 00:19:12.358 13405.091 - 13464.669: 96.5791% ( 12) 00:19:12.358 13464.669 - 13524.247: 96.6808% ( 11) 00:19:12.358 13524.247 - 13583.825: 96.7825% ( 11) 00:19:12.358 13583.825 - 13643.404: 96.8842% ( 11) 00:19:12.358 13643.404 - 13702.982: 97.0137% ( 14) 00:19:12.358 13702.982 - 13762.560: 97.1154% ( 11) 00:19:12.358 13762.560 - 13822.138: 97.1893% ( 8) 00:19:12.358 13822.138 - 13881.716: 97.2818% ( 10) 00:19:12.358 13881.716 - 13941.295: 97.3650% ( 9) 00:19:12.358 13941.295 - 14000.873: 97.4667% ( 11) 00:19:12.358 14000.873 - 14060.451: 97.5499% ( 9) 00:19:12.358 14060.451 - 14120.029: 97.6424% ( 10) 00:19:12.358 14120.029 - 14179.607: 97.7256% ( 9) 00:19:12.358 14179.607 - 14239.185: 97.8088% ( 9) 00:19:12.358 14239.185 - 14298.764: 97.9290% ( 13) 00:19:12.358 14298.764 - 
14358.342: 98.0030% ( 8) 00:19:12.358 14358.342 - 14417.920: 98.0954% ( 10) 00:19:12.358 14417.920 - 14477.498: 98.1971% ( 11) 00:19:12.358 14477.498 - 14537.076: 98.2803% ( 9) 00:19:12.358 14537.076 - 14596.655: 98.3728% ( 10) 00:19:12.358 14596.655 - 14656.233: 98.4375% ( 7) 00:19:12.358 14656.233 - 14715.811: 98.5115% ( 8) 00:19:12.359 14715.811 - 14775.389: 98.5669% ( 6) 00:19:12.359 14775.389 - 14834.967: 98.6039% ( 4) 00:19:12.359 14834.967 - 14894.545: 98.6317% ( 3) 00:19:12.359 14894.545 - 14954.124: 98.6501% ( 2) 00:19:12.359 14954.124 - 15013.702: 98.6779% ( 3) 00:19:12.359 15013.702 - 15073.280: 98.6964% ( 2) 00:19:12.359 15073.280 - 15132.858: 98.7149% ( 2) 00:19:12.359 15132.858 - 15192.436: 98.7426% ( 3) 00:19:12.359 15192.436 - 15252.015: 98.7611% ( 2) 00:19:12.359 15252.015 - 15371.171: 98.7981% ( 4) 00:19:12.359 15371.171 - 15490.327: 98.8166% ( 2) 00:19:12.359 28478.371 - 28597.527: 98.8351% ( 2) 00:19:12.359 28597.527 - 28716.684: 98.8628% ( 3) 00:19:12.359 28716.684 - 28835.840: 98.8998% ( 4) 00:19:12.359 28835.840 - 28954.996: 98.9275% ( 3) 00:19:12.359 28954.996 - 29074.153: 98.9553% ( 3) 00:19:12.359 29074.153 - 29193.309: 98.9922% ( 4) 00:19:12.359 29193.309 - 29312.465: 99.0200% ( 3) 00:19:12.359 29312.465 - 29431.622: 99.0570% ( 4) 00:19:12.359 29431.622 - 29550.778: 99.0939% ( 4) 00:19:12.359 29550.778 - 29669.935: 99.1217% ( 3) 00:19:12.359 29669.935 - 29789.091: 99.1587% ( 4) 00:19:12.359 29789.091 - 29908.247: 99.1956% ( 4) 00:19:12.359 29908.247 - 30027.404: 99.2234% ( 3) 00:19:12.359 30027.404 - 30146.560: 99.2604% ( 4) 00:19:12.359 30146.560 - 30265.716: 99.2881% ( 3) 00:19:12.359 30265.716 - 30384.873: 99.3158% ( 3) 00:19:12.359 30384.873 - 30504.029: 99.3528% ( 4) 00:19:12.359 30504.029 - 30742.342: 99.4083% ( 6) 00:19:12.359 35985.222 - 36223.535: 99.4545% ( 5) 00:19:12.359 36223.535 - 36461.847: 99.5100% ( 6) 00:19:12.359 36461.847 - 36700.160: 99.5655% ( 6) 00:19:12.359 36700.160 - 36938.473: 99.6209% ( 6) 00:19:12.359 36938.473 - 37176.785: 99.6764% ( 6) 00:19:12.359 37176.785 - 37415.098: 99.7411% ( 7) 00:19:12.359 37415.098 - 37653.411: 99.7966% ( 6) 00:19:12.359 37653.411 - 37891.724: 99.8613% ( 7) 00:19:12.359 37891.724 - 38130.036: 99.9260% ( 7) 00:19:12.359 38130.036 - 38368.349: 99.9815% ( 6) 00:19:12.359 38368.349 - 38606.662: 100.0000% ( 2) 00:19:12.359 00:19:12.359 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:12.359 ============================================================================== 00:19:12.359 Range in us Cumulative IO count 00:19:12.359 9949.556 - 10009.135: 0.0092% ( 1) 00:19:12.359 10068.713 - 10128.291: 0.0185% ( 1) 00:19:12.359 10128.291 - 10187.869: 0.0462% ( 3) 00:19:12.359 10187.869 - 10247.447: 0.1572% ( 12) 00:19:12.359 10247.447 - 10307.025: 0.2681% ( 12) 00:19:12.359 10307.025 - 10366.604: 0.5362% ( 29) 00:19:12.359 10366.604 - 10426.182: 1.0170% ( 52) 00:19:12.359 10426.182 - 10485.760: 1.6734% ( 71) 00:19:12.359 10485.760 - 10545.338: 2.5518% ( 95) 00:19:12.359 10545.338 - 10604.916: 3.7999% ( 135) 00:19:12.359 10604.916 - 10664.495: 5.2145% ( 153) 00:19:12.359 10664.495 - 10724.073: 7.1006% ( 204) 00:19:12.359 10724.073 - 10783.651: 9.1901% ( 226) 00:19:12.359 10783.651 - 10843.229: 11.5292% ( 253) 00:19:12.359 10843.229 - 10902.807: 13.9793% ( 265) 00:19:12.359 10902.807 - 10962.385: 16.6328% ( 287) 00:19:12.359 10962.385 - 11021.964: 20.0074% ( 365) 00:19:12.359 11021.964 - 11081.542: 23.3543% ( 362) 00:19:12.359 11081.542 - 11141.120: 26.9693% ( 391) 00:19:12.359 11141.120 - 11200.698: 
30.8709% ( 422) 00:19:12.359 11200.698 - 11260.276: 34.7171% ( 416) 00:19:12.359 11260.276 - 11319.855: 38.6464% ( 425) 00:19:12.359 11319.855 - 11379.433: 42.4371% ( 410) 00:19:12.359 11379.433 - 11439.011: 46.0799% ( 394) 00:19:12.359 11439.011 - 11498.589: 49.8428% ( 407) 00:19:12.359 11498.589 - 11558.167: 53.0695% ( 349) 00:19:12.359 11558.167 - 11617.745: 56.0281% ( 320) 00:19:12.359 11617.745 - 11677.324: 59.4675% ( 372) 00:19:12.359 11677.324 - 11736.902: 63.0917% ( 392) 00:19:12.359 11736.902 - 11796.480: 66.2537% ( 342) 00:19:12.359 11796.480 - 11856.058: 69.3232% ( 332) 00:19:12.359 11856.058 - 11915.636: 72.3095% ( 323) 00:19:12.359 11915.636 - 11975.215: 74.8428% ( 274) 00:19:12.359 11975.215 - 12034.793: 77.3576% ( 272) 00:19:12.359 12034.793 - 12094.371: 79.6413% ( 247) 00:19:12.359 12094.371 - 12153.949: 81.4719% ( 198) 00:19:12.359 12153.949 - 12213.527: 83.1546% ( 182) 00:19:12.359 12213.527 - 12273.105: 84.8095% ( 179) 00:19:12.359 12273.105 - 12332.684: 86.2426% ( 155) 00:19:12.359 12332.684 - 12392.262: 87.4723% ( 133) 00:19:12.359 12392.262 - 12451.840: 88.6002% ( 122) 00:19:12.359 12451.840 - 12511.418: 89.8206% ( 132) 00:19:12.359 12511.418 - 12570.996: 90.7914% ( 105) 00:19:12.359 12570.996 - 12630.575: 91.4941% ( 76) 00:19:12.359 12630.575 - 12690.153: 92.3169% ( 89) 00:19:12.359 12690.153 - 12749.731: 93.0751% ( 82) 00:19:12.359 12749.731 - 12809.309: 93.8979% ( 89) 00:19:12.359 12809.309 - 12868.887: 94.3602% ( 50) 00:19:12.359 12868.887 - 12928.465: 94.7300% ( 40) 00:19:12.359 12928.465 - 12988.044: 95.1368% ( 44) 00:19:12.359 12988.044 - 13047.622: 95.5344% ( 43) 00:19:12.359 13047.622 - 13107.200: 95.7840% ( 27) 00:19:12.359 13107.200 - 13166.778: 95.9597% ( 19) 00:19:12.359 13166.778 - 13226.356: 96.0984% ( 15) 00:19:12.359 13226.356 - 13285.935: 96.2740% ( 19) 00:19:12.359 13285.935 - 13345.513: 96.4035% ( 14) 00:19:12.359 13345.513 - 13405.091: 96.5607% ( 17) 00:19:12.359 13405.091 - 13464.669: 96.6808% ( 13) 00:19:12.359 13464.669 - 13524.247: 96.8380% ( 17) 00:19:12.359 13524.247 - 13583.825: 96.9859% ( 16) 00:19:12.359 13583.825 - 13643.404: 97.0507% ( 7) 00:19:12.359 13643.404 - 13702.982: 97.1246% ( 8) 00:19:12.359 13702.982 - 13762.560: 97.1616% ( 4) 00:19:12.359 13762.560 - 13822.138: 97.2263% ( 7) 00:19:12.359 13822.138 - 13881.716: 97.2818% ( 6) 00:19:12.359 13881.716 - 13941.295: 97.3373% ( 6) 00:19:12.359 13941.295 - 14000.873: 97.3835% ( 5) 00:19:12.359 14000.873 - 14060.451: 97.5037% ( 13) 00:19:12.359 14060.451 - 14120.029: 97.6054% ( 11) 00:19:12.359 14120.029 - 14179.607: 97.7163% ( 12) 00:19:12.359 14179.607 - 14239.185: 97.7718% ( 6) 00:19:12.359 14239.185 - 14298.764: 97.8550% ( 9) 00:19:12.359 14298.764 - 14358.342: 97.9845% ( 14) 00:19:12.359 14358.342 - 14417.920: 98.0492% ( 7) 00:19:12.359 14417.920 - 14477.498: 98.0954% ( 5) 00:19:12.359 14477.498 - 14537.076: 98.1324% ( 4) 00:19:12.359 14537.076 - 14596.655: 98.1509% ( 2) 00:19:12.359 14596.655 - 14656.233: 98.1786% ( 3) 00:19:12.359 14656.233 - 14715.811: 98.2433% ( 7) 00:19:12.359 14715.811 - 14775.389: 98.2988% ( 6) 00:19:12.359 14775.389 - 14834.967: 98.3728% ( 8) 00:19:12.359 14834.967 - 14894.545: 98.4098% ( 4) 00:19:12.359 14894.545 - 14954.124: 98.4560% ( 5) 00:19:12.359 14954.124 - 15013.702: 98.5392% ( 9) 00:19:12.359 15013.702 - 15073.280: 98.6594% ( 13) 00:19:12.359 15073.280 - 15132.858: 98.6871% ( 3) 00:19:12.359 15132.858 - 15192.436: 98.7056% ( 2) 00:19:12.359 15192.436 - 15252.015: 98.7334% ( 3) 00:19:12.359 15252.015 - 15371.171: 98.7703% ( 4) 00:19:12.359 
15371.171 - 15490.327: 98.8166% ( 5) 00:19:12.359 25976.087 - 26095.244: 98.8443% ( 3) 00:19:12.359 26095.244 - 26214.400: 98.8720% ( 3) 00:19:12.359 26214.400 - 26333.556: 98.9090% ( 4) 00:19:12.359 26333.556 - 26452.713: 98.9368% ( 3) 00:19:12.359 26452.713 - 26571.869: 98.9737% ( 4) 00:19:12.359 26571.869 - 26691.025: 99.0107% ( 4) 00:19:12.359 26691.025 - 26810.182: 99.0385% ( 3) 00:19:12.359 26810.182 - 26929.338: 99.0662% ( 3) 00:19:12.359 26929.338 - 27048.495: 99.1032% ( 4) 00:19:12.359 27048.495 - 27167.651: 99.1402% ( 4) 00:19:12.359 27167.651 - 27286.807: 99.1679% ( 3) 00:19:12.359 27286.807 - 27405.964: 99.2049% ( 4) 00:19:12.359 27405.964 - 27525.120: 99.2326% ( 3) 00:19:12.359 27525.120 - 27644.276: 99.2696% ( 4) 00:19:12.359 27644.276 - 27763.433: 99.2973% ( 3) 00:19:12.359 27763.433 - 27882.589: 99.3251% ( 3) 00:19:12.359 27882.589 - 28001.745: 99.3621% ( 4) 00:19:12.359 28001.745 - 28120.902: 99.3990% ( 4) 00:19:12.359 28120.902 - 28240.058: 99.4083% ( 1) 00:19:12.359 33840.407 - 34078.720: 99.4453% ( 4) 00:19:12.359 34078.720 - 34317.033: 99.5007% ( 6) 00:19:12.359 34317.033 - 34555.345: 99.5747% ( 8) 00:19:12.359 34555.345 - 34793.658: 99.6487% ( 8) 00:19:12.359 34793.658 - 35031.971: 99.7041% ( 6) 00:19:12.359 35031.971 - 35270.284: 99.7596% ( 6) 00:19:12.359 35270.284 - 35508.596: 99.8243% ( 7) 00:19:12.359 35508.596 - 35746.909: 99.8983% ( 8) 00:19:12.359 35746.909 - 35985.222: 99.9538% ( 6) 00:19:12.359 35985.222 - 36223.535: 100.0000% ( 5) 00:19:12.359 00:19:12.359 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:12.359 ============================================================================== 00:19:12.359 Range in us Cumulative IO count 00:19:12.359 10068.713 - 10128.291: 0.0276% ( 3) 00:19:12.359 10128.291 - 10187.869: 0.0551% ( 3) 00:19:12.359 10187.869 - 10247.447: 0.1011% ( 5) 00:19:12.359 10247.447 - 10307.025: 0.1379% ( 4) 00:19:12.359 10307.025 - 10366.604: 0.2665% ( 14) 00:19:12.359 10366.604 - 10426.182: 0.7445% ( 52) 00:19:12.359 10426.182 - 10485.760: 1.4430% ( 76) 00:19:12.359 10485.760 - 10545.338: 2.5092% ( 116) 00:19:12.359 10545.338 - 10604.916: 3.8051% ( 141) 00:19:12.359 10604.916 - 10664.495: 5.4136% ( 175) 00:19:12.359 10664.495 - 10724.073: 7.4816% ( 225) 00:19:12.359 10724.073 - 10783.651: 9.6783% ( 239) 00:19:12.359 10783.651 - 10843.229: 11.9577% ( 248) 00:19:12.359 10843.229 - 10902.807: 14.4026% ( 266) 00:19:12.359 10902.807 - 10962.385: 17.3713% ( 323) 00:19:12.359 10962.385 - 11021.964: 20.2665% ( 315) 00:19:12.359 11021.964 - 11081.542: 23.5938% ( 362) 00:19:12.359 11081.542 - 11141.120: 27.5092% ( 426) 00:19:12.359 11141.120 - 11200.698: 31.2316% ( 405) 00:19:12.359 11200.698 - 11260.276: 35.0460% ( 415) 00:19:12.360 11260.276 - 11319.855: 38.9338% ( 423) 00:19:12.360 11319.855 - 11379.433: 42.6838% ( 408) 00:19:12.360 11379.433 - 11439.011: 46.2868% ( 392) 00:19:12.360 11439.011 - 11498.589: 49.9908% ( 403) 00:19:12.360 11498.589 - 11558.167: 53.3915% ( 370) 00:19:12.360 11558.167 - 11617.745: 56.8199% ( 373) 00:19:12.360 11617.745 - 11677.324: 60.2665% ( 375) 00:19:12.360 11677.324 - 11736.902: 63.3915% ( 340) 00:19:12.360 11736.902 - 11796.480: 66.2224% ( 308) 00:19:12.360 11796.480 - 11856.058: 68.7684% ( 277) 00:19:12.360 11856.058 - 11915.636: 71.1949% ( 264) 00:19:12.360 11915.636 - 11975.215: 73.6397% ( 266) 00:19:12.360 11975.215 - 12034.793: 76.1673% ( 275) 00:19:12.360 12034.793 - 12094.371: 78.4651% ( 250) 00:19:12.360 12094.371 - 12153.949: 80.4228% ( 213) 00:19:12.360 12153.949 - 12213.527: 82.2978% 
( 204) 00:19:12.360 12213.527 - 12273.105: 83.8603% ( 170) 00:19:12.360 12273.105 - 12332.684: 85.3493% ( 162) 00:19:12.360 12332.684 - 12392.262: 86.5717% ( 133) 00:19:12.360 12392.262 - 12451.840: 87.7298% ( 126) 00:19:12.360 12451.840 - 12511.418: 88.8051% ( 117) 00:19:12.360 12511.418 - 12570.996: 89.7426% ( 102) 00:19:12.360 12570.996 - 12630.575: 90.5607% ( 89) 00:19:12.360 12630.575 - 12690.153: 91.4246% ( 94) 00:19:12.360 12690.153 - 12749.731: 92.0680% ( 70) 00:19:12.360 12749.731 - 12809.309: 92.7022% ( 69) 00:19:12.360 12809.309 - 12868.887: 93.4099% ( 77) 00:19:12.360 12868.887 - 12928.465: 93.9338% ( 57) 00:19:12.360 12928.465 - 12988.044: 94.4301% ( 54) 00:19:12.360 12988.044 - 13047.622: 94.9265% ( 54) 00:19:12.360 13047.622 - 13107.200: 95.3768% ( 49) 00:19:12.360 13107.200 - 13166.778: 95.8824% ( 55) 00:19:12.360 13166.778 - 13226.356: 96.1305% ( 27) 00:19:12.360 13226.356 - 13285.935: 96.3603% ( 25) 00:19:12.360 13285.935 - 13345.513: 96.5533% ( 21) 00:19:12.360 13345.513 - 13405.091: 96.7371% ( 20) 00:19:12.360 13405.091 - 13464.669: 96.8658% ( 14) 00:19:12.360 13464.669 - 13524.247: 96.9577% ( 10) 00:19:12.360 13524.247 - 13583.825: 97.0037% ( 5) 00:19:12.360 13583.825 - 13643.404: 97.0496% ( 5) 00:19:12.360 13643.404 - 13702.982: 97.0864% ( 4) 00:19:12.360 13702.982 - 13762.560: 97.1048% ( 2) 00:19:12.360 13762.560 - 13822.138: 97.1324% ( 3) 00:19:12.360 13822.138 - 13881.716: 97.1783% ( 5) 00:19:12.360 13881.716 - 13941.295: 97.2794% ( 11) 00:19:12.360 13941.295 - 14000.873: 97.3897% ( 12) 00:19:12.360 14000.873 - 14060.451: 97.5000% ( 12) 00:19:12.360 14060.451 - 14120.029: 97.6471% ( 16) 00:19:12.360 14120.029 - 14179.607: 97.7665% ( 13) 00:19:12.360 14179.607 - 14239.185: 97.9320% ( 18) 00:19:12.360 14239.185 - 14298.764: 98.0515% ( 13) 00:19:12.360 14298.764 - 14358.342: 98.1710% ( 13) 00:19:12.360 14358.342 - 14417.920: 98.2261% ( 6) 00:19:12.360 14417.920 - 14477.498: 98.3548% ( 14) 00:19:12.360 14477.498 - 14537.076: 98.4559% ( 11) 00:19:12.360 14537.076 - 14596.655: 98.5570% ( 11) 00:19:12.360 14596.655 - 14656.233: 98.6213% ( 7) 00:19:12.360 14656.233 - 14715.811: 98.7040% ( 9) 00:19:12.360 14715.811 - 14775.389: 98.7592% ( 6) 00:19:12.360 14775.389 - 14834.967: 98.7960% ( 4) 00:19:12.360 14834.967 - 14894.545: 98.8235% ( 3) 00:19:12.360 17873.455 - 17992.611: 98.8327% ( 1) 00:19:12.360 17992.611 - 18111.767: 98.8603% ( 3) 00:19:12.360 18111.767 - 18230.924: 98.8787% ( 2) 00:19:12.360 18230.924 - 18350.080: 98.9062% ( 3) 00:19:12.360 18350.080 - 18469.236: 98.9338% ( 3) 00:19:12.360 18469.236 - 18588.393: 98.9522% ( 2) 00:19:12.360 18588.393 - 18707.549: 98.9798% ( 3) 00:19:12.360 18707.549 - 18826.705: 98.9982% ( 2) 00:19:12.360 18826.705 - 18945.862: 99.0349% ( 4) 00:19:12.360 18945.862 - 19065.018: 99.0625% ( 3) 00:19:12.360 19065.018 - 19184.175: 99.0901% ( 3) 00:19:12.360 19184.175 - 19303.331: 99.1176% ( 3) 00:19:12.360 19303.331 - 19422.487: 99.1452% ( 3) 00:19:12.360 19422.487 - 19541.644: 99.1820% ( 4) 00:19:12.360 19541.644 - 19660.800: 99.2096% ( 3) 00:19:12.360 19660.800 - 19779.956: 99.2463% ( 4) 00:19:12.360 19779.956 - 19899.113: 99.2831% ( 4) 00:19:12.360 19899.113 - 20018.269: 99.3199% ( 4) 00:19:12.360 20018.269 - 20137.425: 99.3474% ( 3) 00:19:12.360 20137.425 - 20256.582: 99.3842% ( 4) 00:19:12.360 20256.582 - 20375.738: 99.4118% ( 3) 00:19:12.360 25380.305 - 25499.462: 99.4393% ( 3) 00:19:12.360 25499.462 - 25618.618: 99.4761% ( 4) 00:19:12.360 25618.618 - 25737.775: 99.5037% ( 3) 00:19:12.360 25737.775 - 25856.931: 99.5404% ( 4) 
00:19:12.360 25856.931 - 25976.087: 99.5680% ( 3) 00:19:12.360 25976.087 - 26095.244: 99.6048% ( 4) 00:19:12.360 26095.244 - 26214.400: 99.6324% ( 3) 00:19:12.360 26214.400 - 26333.556: 99.6691% ( 4) 00:19:12.360 26333.556 - 26452.713: 99.7059% ( 4) 00:19:12.360 26452.713 - 26571.869: 99.7426% ( 4) 00:19:12.360 26571.869 - 26691.025: 99.7702% ( 3) 00:19:12.360 26691.025 - 26810.182: 99.7978% ( 3) 00:19:12.360 26810.182 - 26929.338: 99.8346% ( 4) 00:19:12.360 26929.338 - 27048.495: 99.8621% ( 3) 00:19:12.360 27048.495 - 27167.651: 99.8989% ( 4) 00:19:12.360 27167.651 - 27286.807: 99.9265% ( 3) 00:19:12.360 27286.807 - 27405.964: 99.9632% ( 4) 00:19:12.360 27405.964 - 27525.120: 100.0000% ( 4) 00:19:12.360 00:19:12.360 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:12.360 ============================================================================== 00:19:12.360 Range in us Cumulative IO count 00:19:12.360 10068.713 - 10128.291: 0.0184% ( 2) 00:19:12.360 10128.291 - 10187.869: 0.0827% ( 7) 00:19:12.360 10187.869 - 10247.447: 0.1287% ( 5) 00:19:12.360 10247.447 - 10307.025: 0.2390% ( 12) 00:19:12.360 10307.025 - 10366.604: 0.6066% ( 40) 00:19:12.360 10366.604 - 10426.182: 1.1581% ( 60) 00:19:12.360 10426.182 - 10485.760: 1.9577% ( 87) 00:19:12.360 10485.760 - 10545.338: 2.9228% ( 105) 00:19:12.360 10545.338 - 10604.916: 4.2004% ( 139) 00:19:12.360 10604.916 - 10664.495: 5.6526% ( 158) 00:19:12.360 10664.495 - 10724.073: 7.9136% ( 246) 00:19:12.360 10724.073 - 10783.651: 10.1379% ( 242) 00:19:12.360 10783.651 - 10843.229: 12.1507% ( 219) 00:19:12.360 10843.229 - 10902.807: 14.3107% ( 235) 00:19:12.360 10902.807 - 10962.385: 16.9945% ( 292) 00:19:12.360 10962.385 - 11021.964: 20.1930% ( 348) 00:19:12.360 11021.964 - 11081.542: 23.5018% ( 360) 00:19:12.360 11081.542 - 11141.120: 26.7463% ( 353) 00:19:12.360 11141.120 - 11200.698: 30.6710% ( 427) 00:19:12.360 11200.698 - 11260.276: 33.9614% ( 358) 00:19:12.360 11260.276 - 11319.855: 37.7022% ( 407) 00:19:12.360 11319.855 - 11379.433: 41.3695% ( 399) 00:19:12.360 11379.433 - 11439.011: 45.5055% ( 450) 00:19:12.360 11439.011 - 11498.589: 49.4393% ( 428) 00:19:12.360 11498.589 - 11558.167: 53.0607% ( 394) 00:19:12.360 11558.167 - 11617.745: 56.6820% ( 394) 00:19:12.360 11617.745 - 11677.324: 59.8346% ( 343) 00:19:12.360 11677.324 - 11736.902: 62.9320% ( 337) 00:19:12.360 11736.902 - 11796.480: 66.6360% ( 403) 00:19:12.360 11796.480 - 11856.058: 69.6691% ( 330) 00:19:12.360 11856.058 - 11915.636: 72.0772% ( 262) 00:19:12.360 11915.636 - 11975.215: 74.2923% ( 241) 00:19:12.360 11975.215 - 12034.793: 76.8015% ( 273) 00:19:12.360 12034.793 - 12094.371: 79.2371% ( 265) 00:19:12.360 12094.371 - 12153.949: 81.0018% ( 192) 00:19:12.360 12153.949 - 12213.527: 82.5184% ( 165) 00:19:12.360 12213.527 - 12273.105: 84.0349% ( 165) 00:19:12.360 12273.105 - 12332.684: 85.4963% ( 159) 00:19:12.360 12332.684 - 12392.262: 86.5717% ( 117) 00:19:12.360 12392.262 - 12451.840: 87.5276% ( 104) 00:19:12.360 12451.840 - 12511.418: 88.5754% ( 114) 00:19:12.360 12511.418 - 12570.996: 89.3934% ( 89) 00:19:12.360 12570.996 - 12630.575: 90.3217% ( 101) 00:19:12.360 12630.575 - 12690.153: 91.0478% ( 79) 00:19:12.360 12690.153 - 12749.731: 91.6636% ( 67) 00:19:12.360 12749.731 - 12809.309: 92.2610% ( 65) 00:19:12.360 12809.309 - 12868.887: 92.8768% ( 67) 00:19:12.360 12868.887 - 12928.465: 93.4743% ( 65) 00:19:12.360 12928.465 - 12988.044: 93.9982% ( 57) 00:19:12.360 12988.044 - 13047.622: 94.4669% ( 51) 00:19:12.360 13047.622 - 13107.200: 94.8254% ( 39) 
00:19:12.360 13107.200 - 13166.778: 95.1471% ( 35) 00:19:12.360 13166.778 - 13226.356: 95.7169% ( 62) 00:19:12.360 13226.356 - 13285.935: 96.1489% ( 47) 00:19:12.360 13285.935 - 13345.513: 96.3327% ( 20) 00:19:12.360 13345.513 - 13405.091: 96.5165% ( 20) 00:19:12.360 13405.091 - 13464.669: 96.6452% ( 14) 00:19:12.360 13464.669 - 13524.247: 96.7923% ( 16) 00:19:12.360 13524.247 - 13583.825: 96.8934% ( 11) 00:19:12.360 13583.825 - 13643.404: 96.9393% ( 5) 00:19:12.360 13643.404 - 13702.982: 96.9761% ( 4) 00:19:12.360 13702.982 - 13762.560: 96.9945% ( 2) 00:19:12.360 13762.560 - 13822.138: 97.0129% ( 2) 00:19:12.360 13822.138 - 13881.716: 97.0404% ( 3) 00:19:12.360 13881.716 - 13941.295: 97.0956% ( 6) 00:19:12.360 13941.295 - 14000.873: 97.1599% ( 7) 00:19:12.360 14000.873 - 14060.451: 97.2886% ( 14) 00:19:12.360 14060.451 - 14120.029: 97.3989% ( 12) 00:19:12.360 14120.029 - 14179.607: 97.5000% ( 11) 00:19:12.360 14179.607 - 14239.185: 97.6287% ( 14) 00:19:12.360 14239.185 - 14298.764: 97.8125% ( 20) 00:19:12.360 14298.764 - 14358.342: 97.8952% ( 9) 00:19:12.360 14358.342 - 14417.920: 98.0974% ( 22) 00:19:12.360 14417.920 - 14477.498: 98.1710% ( 8) 00:19:12.360 14477.498 - 14537.076: 98.2537% ( 9) 00:19:12.360 14537.076 - 14596.655: 98.3272% ( 8) 00:19:12.360 14596.655 - 14656.233: 98.3915% ( 7) 00:19:12.360 14656.233 - 14715.811: 98.4743% ( 9) 00:19:12.360 14715.811 - 14775.389: 98.5202% ( 5) 00:19:12.360 14775.389 - 14834.967: 98.5662% ( 5) 00:19:12.360 14834.967 - 14894.545: 98.6305% ( 7) 00:19:12.360 14894.545 - 14954.124: 98.6857% ( 6) 00:19:12.360 14954.124 - 15013.702: 98.7132% ( 3) 00:19:12.361 15013.702 - 15073.280: 98.7316% ( 2) 00:19:12.361 15073.280 - 15132.858: 98.7408% ( 1) 00:19:12.361 15132.858 - 15192.436: 98.7592% ( 2) 00:19:12.361 15192.436 - 15252.015: 98.7684% ( 1) 00:19:12.361 15252.015 - 15371.171: 98.8051% ( 4) 00:19:12.361 15371.171 - 15490.327: 98.8235% ( 2) 00:19:12.361 15728.640 - 15847.796: 98.8879% ( 7) 00:19:12.361 15847.796 - 15966.953: 98.9154% ( 3) 00:19:12.361 15966.953 - 16086.109: 98.9522% ( 4) 00:19:12.361 16086.109 - 16205.265: 98.9798% ( 3) 00:19:12.361 16205.265 - 16324.422: 99.0165% ( 4) 00:19:12.361 16324.422 - 16443.578: 99.0441% ( 3) 00:19:12.361 16443.578 - 16562.735: 99.0717% ( 3) 00:19:12.361 16562.735 - 16681.891: 99.1085% ( 4) 00:19:12.361 16681.891 - 16801.047: 99.1360% ( 3) 00:19:12.361 16801.047 - 16920.204: 99.1636% ( 3) 00:19:12.361 16920.204 - 17039.360: 99.2004% ( 4) 00:19:12.361 17039.360 - 17158.516: 99.2279% ( 3) 00:19:12.361 17158.516 - 17277.673: 99.2555% ( 3) 00:19:12.361 17277.673 - 17396.829: 99.2923% ( 4) 00:19:12.361 17396.829 - 17515.985: 99.3199% ( 3) 00:19:12.361 17515.985 - 17635.142: 99.3566% ( 4) 00:19:12.361 17635.142 - 17754.298: 99.3842% ( 3) 00:19:12.361 17754.298 - 17873.455: 99.4118% ( 3) 00:19:12.361 22878.022 - 22997.178: 99.4301% ( 2) 00:19:12.361 22997.178 - 23116.335: 99.4577% ( 3) 00:19:12.361 23116.335 - 23235.491: 99.4853% ( 3) 00:19:12.361 23235.491 - 23354.647: 99.5221% ( 4) 00:19:12.361 23354.647 - 23473.804: 99.5496% ( 3) 00:19:12.361 23473.804 - 23592.960: 99.5864% ( 4) 00:19:12.361 23592.960 - 23712.116: 99.6232% ( 4) 00:19:12.361 23712.116 - 23831.273: 99.6507% ( 3) 00:19:12.361 23831.273 - 23950.429: 99.6875% ( 4) 00:19:12.361 23950.429 - 24069.585: 99.7243% ( 4) 00:19:12.361 24069.585 - 24188.742: 99.7610% ( 4) 00:19:12.361 24188.742 - 24307.898: 99.7978% ( 4) 00:19:12.361 24307.898 - 24427.055: 99.8254% ( 3) 00:19:12.361 24427.055 - 24546.211: 99.8529% ( 3) 00:19:12.361 24546.211 - 24665.367: 
99.8897% ( 4) 00:19:12.361 24665.367 - 24784.524: 99.9173% ( 3) 00:19:12.361 24784.524 - 24903.680: 99.9540% ( 4) 00:19:12.361 24903.680 - 25022.836: 99.9908% ( 4) 00:19:12.361 25022.836 - 25141.993: 100.0000% ( 1) 00:19:12.361 00:19:12.361 ************************************ 00:19:12.361 END TEST nvme_perf 00:19:12.361 ************************************ 00:19:12.361 11:33:18 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:12.361 00:19:12.361 real 0m2.879s 00:19:12.361 user 0m2.351s 00:19:12.361 sys 0m0.406s 00:19:12.361 11:33:18 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.361 11:33:18 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:12.361 11:33:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:12.361 11:33:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:12.361 11:33:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.361 11:33:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.361 ************************************ 00:19:12.361 START TEST nvme_hello_world 00:19:12.361 ************************************ 00:19:12.361 11:33:18 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:12.927 Initializing NVMe Controllers 00:19:12.927 Attached to 0000:00:10.0 00:19:12.927 Namespace ID: 1 size: 6GB 00:19:12.927 Attached to 0000:00:11.0 00:19:12.927 Namespace ID: 1 size: 5GB 00:19:12.927 Attached to 0000:00:13.0 00:19:12.927 Namespace ID: 1 size: 1GB 00:19:12.927 Attached to 0000:00:12.0 00:19:12.927 Namespace ID: 1 size: 4GB 00:19:12.927 Namespace ID: 2 size: 4GB 00:19:12.927 Namespace ID: 3 size: 4GB 00:19:12.927 Initialization complete. 00:19:12.927 INFO: using host memory buffer for IO 00:19:12.927 Hello world! 00:19:12.927 INFO: using host memory buffer for IO 00:19:12.927 Hello world! 00:19:12.927 INFO: using host memory buffer for IO 00:19:12.927 Hello world! 00:19:12.927 INFO: using host memory buffer for IO 00:19:12.927 Hello world! 00:19:12.927 INFO: using host memory buffer for IO 00:19:12.927 Hello world! 00:19:12.927 INFO: using host memory buffer for IO 00:19:12.927 Hello world! 
00:19:12.927 ************************************ 00:19:12.927 END TEST nvme_hello_world 00:19:12.927 ************************************ 00:19:12.927 00:19:12.927 real 0m0.368s 00:19:12.927 user 0m0.151s 00:19:12.927 sys 0m0.165s 00:19:12.927 11:33:18 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.927 11:33:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:12.927 11:33:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:12.927 11:33:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.927 11:33:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.927 11:33:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.927 ************************************ 00:19:12.927 START TEST nvme_sgl 00:19:12.927 ************************************ 00:19:12.927 11:33:18 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:13.185 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:13.185 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:13.185 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:13.185 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:13.185 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:13.185 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:13.185 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:19:13.185 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:19:13.185 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:19:13.185 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:19:13.185 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:19:13.185 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:19:13.185 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:19:13.185 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:19:13.185 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:19:13.443 NVMe Readv/Writev Request test 00:19:13.443 Attached to 0000:00:10.0 00:19:13.443 Attached to 0000:00:11.0 00:19:13.443 Attached to 0000:00:13.0 00:19:13.443 Attached to 0000:00:12.0 00:19:13.443 0000:00:10.0: build_io_request_2 test passed 00:19:13.443 0000:00:10.0: build_io_request_4 test passed 00:19:13.443 0000:00:10.0: build_io_request_5 test passed 00:19:13.443 0000:00:10.0: build_io_request_6 test passed 00:19:13.443 0000:00:10.0: build_io_request_7 test passed 00:19:13.443 0000:00:10.0: build_io_request_10 test passed 00:19:13.443 0000:00:11.0: build_io_request_2 test passed 00:19:13.443 0000:00:11.0: build_io_request_4 test passed 00:19:13.443 0000:00:11.0: build_io_request_5 test passed 00:19:13.443 0000:00:11.0: build_io_request_6 test passed 00:19:13.443 0000:00:11.0: build_io_request_7 test passed 00:19:13.443 0000:00:11.0: build_io_request_10 test passed 00:19:13.443 Cleaning up... 00:19:13.443 ************************************ 00:19:13.443 END TEST nvme_sgl 00:19:13.443 ************************************ 00:19:13.443 00:19:13.443 real 0m0.465s 00:19:13.443 user 0m0.240s 00:19:13.443 sys 0m0.176s 00:19:13.443 11:33:18 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.443 11:33:18 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:13.443 11:33:19 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:13.443 11:33:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:13.443 11:33:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.443 11:33:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:13.443 ************************************ 00:19:13.443 START TEST nvme_e2edp 00:19:13.443 ************************************ 00:19:13.443 11:33:19 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:13.699 NVMe Write/Read with End-to-End data protection test 00:19:13.699 Attached to 0000:00:10.0 00:19:13.699 Attached to 0000:00:11.0 00:19:13.699 Attached to 0000:00:13.0 00:19:13.699 Attached to 0000:00:12.0 00:19:13.699 Cleaning up... 
00:19:13.699 ************************************ 00:19:13.699 END TEST nvme_e2edp 00:19:13.699 ************************************ 00:19:13.699 00:19:13.699 real 0m0.350s 00:19:13.699 user 0m0.130s 00:19:13.699 sys 0m0.166s 00:19:13.699 11:33:19 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.699 11:33:19 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:13.699 11:33:19 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:13.699 11:33:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:13.699 11:33:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.699 11:33:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:13.699 ************************************ 00:19:13.699 START TEST nvme_reserve 00:19:13.699 ************************************ 00:19:13.699 11:33:19 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:14.265 ===================================================== 00:19:14.265 NVMe Controller at PCI bus 0, device 16, function 0 00:19:14.265 ===================================================== 00:19:14.265 Reservations: Not Supported 00:19:14.265 ===================================================== 00:19:14.265 NVMe Controller at PCI bus 0, device 17, function 0 00:19:14.265 ===================================================== 00:19:14.265 Reservations: Not Supported 00:19:14.265 ===================================================== 00:19:14.265 NVMe Controller at PCI bus 0, device 19, function 0 00:19:14.265 ===================================================== 00:19:14.265 Reservations: Not Supported 00:19:14.265 ===================================================== 00:19:14.265 NVMe Controller at PCI bus 0, device 18, function 0 00:19:14.265 ===================================================== 00:19:14.265 Reservations: Not Supported 00:19:14.265 Reservation test passed 00:19:14.265 ************************************ 00:19:14.265 END TEST nvme_reserve 00:19:14.265 ************************************ 00:19:14.265 00:19:14.265 real 0m0.369s 00:19:14.265 user 0m0.152s 00:19:14.265 sys 0m0.160s 00:19:14.265 11:33:19 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.265 11:33:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:14.265 11:33:19 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:14.265 11:33:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:14.265 11:33:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.265 11:33:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.265 ************************************ 00:19:14.265 START TEST nvme_err_injection 00:19:14.265 ************************************ 00:19:14.265 11:33:19 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:14.523 NVMe Error Injection test 00:19:14.523 Attached to 0000:00:10.0 00:19:14.523 Attached to 0000:00:11.0 00:19:14.523 Attached to 0000:00:13.0 00:19:14.523 Attached to 0000:00:12.0 00:19:14.523 0000:00:11.0: get features failed as expected 00:19:14.523 0000:00:13.0: get features failed as expected 00:19:14.523 0000:00:12.0: get features failed as expected 00:19:14.523 0000:00:10.0: get features failed as expected 00:19:14.523 
0000:00:12.0: get features successfully as expected 00:19:14.523 0000:00:10.0: get features successfully as expected 00:19:14.523 0000:00:11.0: get features successfully as expected 00:19:14.523 0000:00:13.0: get features successfully as expected 00:19:14.523 0000:00:10.0: read failed as expected 00:19:14.523 0000:00:11.0: read failed as expected 00:19:14.523 0000:00:13.0: read failed as expected 00:19:14.523 0000:00:12.0: read failed as expected 00:19:14.523 0000:00:10.0: read successfully as expected 00:19:14.523 0000:00:11.0: read successfully as expected 00:19:14.523 0000:00:13.0: read successfully as expected 00:19:14.523 0000:00:12.0: read successfully as expected 00:19:14.523 Cleaning up... 00:19:14.523 ************************************ 00:19:14.523 END TEST nvme_err_injection 00:19:14.523 ************************************ 00:19:14.523 00:19:14.523 real 0m0.373s 00:19:14.523 user 0m0.152s 00:19:14.523 sys 0m0.172s 00:19:14.523 11:33:20 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.523 11:33:20 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:14.523 11:33:20 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:14.523 11:33:20 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:19:14.523 11:33:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.523 11:33:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.523 ************************************ 00:19:14.523 START TEST nvme_overhead 00:19:14.523 ************************************ 00:19:14.523 11:33:20 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:15.898 Initializing NVMe Controllers 00:19:15.898 Attached to 0000:00:10.0 00:19:15.898 Attached to 0000:00:11.0 00:19:15.898 Attached to 0000:00:13.0 00:19:15.898 Attached to 0000:00:12.0 00:19:15.898 Initialization complete. Launching workers. 
00:19:15.898 submit (in ns) avg, min, max = 15409.9, 12580.0, 55015.9 00:19:15.898 complete (in ns) avg, min, max = 10341.8, 9095.5, 126738.6 00:19:15.898 00:19:15.898 Submit histogram 00:19:15.898 ================ 00:19:15.898 Range in us Cumulative Count 00:19:15.898 12.567 - 12.625: 0.0108% ( 1) 00:19:15.899 12.742 - 12.800: 0.0216% ( 1) 00:19:15.899 12.800 - 12.858: 0.0324% ( 1) 00:19:15.899 12.858 - 12.916: 0.0540% ( 2) 00:19:15.899 12.975 - 13.033: 0.1404% ( 8) 00:19:15.899 13.033 - 13.091: 0.2484% ( 10) 00:19:15.899 13.091 - 13.149: 0.4535% ( 19) 00:19:15.899 13.149 - 13.207: 0.5831% ( 12) 00:19:15.899 13.207 - 13.265: 0.7019% ( 11) 00:19:15.899 13.265 - 13.324: 0.9286% ( 21) 00:19:15.899 13.324 - 13.382: 1.7817% ( 79) 00:19:15.899 13.382 - 13.440: 4.1140% ( 216) 00:19:15.899 13.440 - 13.498: 8.3144% ( 389) 00:19:15.899 13.498 - 13.556: 13.3787% ( 469) 00:19:15.899 13.556 - 13.615: 17.2120% ( 355) 00:19:15.899 13.615 - 13.673: 20.2894% ( 285) 00:19:15.899 13.673 - 13.731: 22.4490% ( 200) 00:19:15.899 13.731 - 13.789: 24.2522% ( 167) 00:19:15.899 13.789 - 13.847: 25.8611% ( 149) 00:19:15.899 13.847 - 13.905: 27.0705% ( 112) 00:19:15.899 13.905 - 13.964: 27.9991% ( 86) 00:19:15.899 13.964 - 14.022: 28.6254% ( 58) 00:19:15.899 14.022 - 14.080: 29.0681% ( 41) 00:19:15.899 14.080 - 14.138: 29.4893% ( 39) 00:19:15.899 14.138 - 14.196: 29.7484% ( 24) 00:19:15.899 14.196 - 14.255: 29.8996% ( 14) 00:19:15.899 14.255 - 14.313: 30.0292% ( 12) 00:19:15.899 14.313 - 14.371: 30.2235% ( 18) 00:19:15.899 14.371 - 14.429: 30.5475% ( 30) 00:19:15.899 14.429 - 14.487: 31.8324% ( 119) 00:19:15.899 14.487 - 14.545: 34.6075% ( 257) 00:19:15.899 14.545 - 14.604: 39.3046% ( 435) 00:19:15.899 14.604 - 14.662: 45.2867% ( 554) 00:19:15.899 14.662 - 14.720: 50.9880% ( 528) 00:19:15.899 14.720 - 14.778: 56.0307% ( 467) 00:19:15.899 14.778 - 14.836: 60.1663% ( 383) 00:19:15.899 14.836 - 14.895: 64.1183% ( 366) 00:19:15.899 14.895 - 15.011: 68.7831% ( 432) 00:19:15.899 15.011 - 15.127: 72.2492% ( 321) 00:19:15.899 15.127 - 15.244: 74.3872% ( 198) 00:19:15.899 15.244 - 15.360: 76.1905% ( 167) 00:19:15.899 15.360 - 15.476: 77.4646% ( 118) 00:19:15.899 15.476 - 15.593: 78.1233% ( 61) 00:19:15.899 15.593 - 15.709: 78.7172% ( 55) 00:19:15.899 15.709 - 15.825: 79.2247% ( 47) 00:19:15.899 15.825 - 15.942: 79.5270% ( 28) 00:19:15.899 15.942 - 16.058: 79.7754% ( 23) 00:19:15.899 16.058 - 16.175: 79.9914% ( 20) 00:19:15.899 16.175 - 16.291: 80.1425% ( 14) 00:19:15.899 16.291 - 16.407: 80.2181% ( 7) 00:19:15.899 16.407 - 16.524: 80.2505% ( 3) 00:19:15.899 16.524 - 16.640: 80.3153% ( 6) 00:19:15.899 16.640 - 16.756: 80.3585% ( 4) 00:19:15.899 16.756 - 16.873: 80.3909% ( 3) 00:19:15.899 16.873 - 16.989: 80.4233% ( 3) 00:19:15.899 17.105 - 17.222: 80.4341% ( 1) 00:19:15.899 17.222 - 17.338: 80.4665% ( 3) 00:19:15.899 17.338 - 17.455: 80.4881% ( 2) 00:19:15.899 17.455 - 17.571: 80.6068% ( 11) 00:19:15.899 17.571 - 17.687: 81.8378% ( 114) 00:19:15.899 17.687 - 17.804: 85.0772% ( 300) 00:19:15.899 17.804 - 17.920: 88.2086% ( 290) 00:19:15.899 17.920 - 18.036: 89.9903% ( 165) 00:19:15.899 18.036 - 18.153: 90.8973% ( 84) 00:19:15.899 18.153 - 18.269: 91.5452% ( 60) 00:19:15.899 18.269 - 18.385: 92.0635% ( 48) 00:19:15.899 18.385 - 18.502: 92.8193% ( 70) 00:19:15.899 18.502 - 18.618: 93.3269% ( 47) 00:19:15.899 18.618 - 18.735: 93.6832% ( 33) 00:19:15.899 18.735 - 18.851: 93.9423% ( 24) 00:19:15.899 18.851 - 18.967: 94.0719% ( 12) 00:19:15.899 18.967 - 19.084: 94.1475% ( 7) 00:19:15.899 19.084 - 19.200: 94.2987% ( 14) 00:19:15.899 
19.200 - 19.316: 94.4282% ( 12) 00:19:15.899 19.316 - 19.433: 94.5146% ( 8) 00:19:15.899 19.433 - 19.549: 94.6226% ( 10) 00:19:15.899 19.549 - 19.665: 94.7090% ( 8) 00:19:15.899 19.665 - 19.782: 94.7954% ( 8) 00:19:15.899 19.782 - 19.898: 94.9034% ( 10) 00:19:15.899 19.898 - 20.015: 95.0113% ( 10) 00:19:15.899 20.015 - 20.131: 95.1085% ( 9) 00:19:15.899 20.131 - 20.247: 95.2705% ( 15) 00:19:15.899 20.247 - 20.364: 95.4864% ( 20) 00:19:15.899 20.364 - 20.480: 95.6592% ( 16) 00:19:15.899 20.480 - 20.596: 95.8212% ( 15) 00:19:15.899 20.596 - 20.713: 95.9616% ( 13) 00:19:15.899 20.713 - 20.829: 96.1127% ( 14) 00:19:15.899 20.829 - 20.945: 96.2531% ( 13) 00:19:15.899 20.945 - 21.062: 96.3395% ( 8) 00:19:15.899 21.062 - 21.178: 96.4907% ( 14) 00:19:15.899 21.178 - 21.295: 96.5878% ( 9) 00:19:15.899 21.295 - 21.411: 96.7282% ( 13) 00:19:15.899 21.411 - 21.527: 96.8578% ( 12) 00:19:15.899 21.527 - 21.644: 96.9334% ( 7) 00:19:15.899 21.644 - 21.760: 97.0414% ( 10) 00:19:15.899 21.760 - 21.876: 97.1061% ( 6) 00:19:15.899 21.876 - 21.993: 97.1709% ( 6) 00:19:15.899 21.993 - 22.109: 97.2249% ( 5) 00:19:15.899 22.109 - 22.225: 97.3113% ( 8) 00:19:15.899 22.225 - 22.342: 97.3329% ( 2) 00:19:15.899 22.342 - 22.458: 97.3761% ( 4) 00:19:15.899 22.458 - 22.575: 97.4409% ( 6) 00:19:15.899 22.575 - 22.691: 97.5165% ( 7) 00:19:15.899 22.691 - 22.807: 97.5381% ( 2) 00:19:15.899 22.807 - 22.924: 97.5813% ( 4) 00:19:15.899 22.924 - 23.040: 97.6352% ( 5) 00:19:15.899 23.156 - 23.273: 97.6568% ( 2) 00:19:15.899 23.273 - 23.389: 97.7108% ( 5) 00:19:15.899 23.389 - 23.505: 97.7324% ( 2) 00:19:15.899 23.505 - 23.622: 97.7864% ( 5) 00:19:15.899 23.622 - 23.738: 97.8512% ( 6) 00:19:15.899 23.738 - 23.855: 97.9052% ( 5) 00:19:15.899 23.855 - 23.971: 97.9484% ( 4) 00:19:15.899 23.971 - 24.087: 97.9808% ( 3) 00:19:15.899 24.087 - 24.204: 98.0996% ( 11) 00:19:15.899 24.204 - 24.320: 98.1967% ( 9) 00:19:15.899 24.320 - 24.436: 98.2507% ( 5) 00:19:15.899 24.436 - 24.553: 98.2831% ( 3) 00:19:15.899 24.553 - 24.669: 98.3155% ( 3) 00:19:15.899 24.669 - 24.785: 98.3695% ( 5) 00:19:15.899 24.785 - 24.902: 98.4343% ( 6) 00:19:15.899 24.902 - 25.018: 98.4775% ( 4) 00:19:15.899 25.018 - 25.135: 98.4991% ( 2) 00:19:15.899 25.135 - 25.251: 98.5963% ( 9) 00:19:15.899 25.251 - 25.367: 98.6287% ( 3) 00:19:15.899 25.367 - 25.484: 98.6934% ( 6) 00:19:15.899 25.484 - 25.600: 98.7258% ( 3) 00:19:15.899 25.600 - 25.716: 98.7582% ( 3) 00:19:15.899 25.716 - 25.833: 98.8014% ( 4) 00:19:15.899 25.833 - 25.949: 98.8122% ( 1) 00:19:15.899 25.949 - 26.065: 98.8338% ( 2) 00:19:15.899 26.065 - 26.182: 98.8770% ( 4) 00:19:15.899 26.182 - 26.298: 98.9094% ( 3) 00:19:15.899 26.298 - 26.415: 98.9742% ( 6) 00:19:15.899 26.415 - 26.531: 98.9850% ( 1) 00:19:15.899 26.531 - 26.647: 99.0066% ( 2) 00:19:15.899 26.647 - 26.764: 99.0174% ( 1) 00:19:15.899 26.996 - 27.113: 99.0606% ( 4) 00:19:15.899 27.229 - 27.345: 99.0822% ( 2) 00:19:15.899 27.462 - 27.578: 99.0930% ( 1) 00:19:15.899 27.578 - 27.695: 99.1038% ( 1) 00:19:15.899 27.695 - 27.811: 99.1362% ( 3) 00:19:15.899 27.811 - 27.927: 99.1578% ( 2) 00:19:15.899 27.927 - 28.044: 99.1794% ( 2) 00:19:15.899 28.044 - 28.160: 99.2657% ( 8) 00:19:15.899 28.160 - 28.276: 99.3089% ( 4) 00:19:15.899 28.276 - 28.393: 99.3413% ( 3) 00:19:15.899 28.393 - 28.509: 99.3953% ( 5) 00:19:15.899 28.509 - 28.625: 99.4385% ( 4) 00:19:15.899 28.625 - 28.742: 99.4817% ( 4) 00:19:15.899 28.742 - 28.858: 99.5141% ( 3) 00:19:15.899 28.858 - 28.975: 99.5573% ( 4) 00:19:15.899 29.091 - 29.207: 99.5789% ( 2) 00:19:15.899 29.207 - 29.324: 
99.5897% ( 1) 00:19:15.899 29.324 - 29.440: 99.6221% ( 3) 00:19:15.899 29.440 - 29.556: 99.6329% ( 1) 00:19:15.899 29.673 - 29.789: 99.6437% ( 1) 00:19:15.899 29.789 - 30.022: 99.7085% ( 6) 00:19:15.899 30.022 - 30.255: 99.7408% ( 3) 00:19:15.899 30.255 - 30.487: 99.7624% ( 2) 00:19:15.899 30.487 - 30.720: 99.8164% ( 5) 00:19:15.899 30.720 - 30.953: 99.8272% ( 1) 00:19:15.899 30.953 - 31.185: 99.8380% ( 1) 00:19:15.899 31.185 - 31.418: 99.8596% ( 2) 00:19:15.899 31.418 - 31.651: 99.8704% ( 1) 00:19:15.899 32.582 - 32.815: 99.8812% ( 1) 00:19:15.899 32.815 - 33.047: 99.8920% ( 1) 00:19:15.899 33.280 - 33.513: 99.9028% ( 1) 00:19:15.899 33.513 - 33.745: 99.9136% ( 1) 00:19:15.899 34.211 - 34.444: 99.9244% ( 1) 00:19:15.899 34.909 - 35.142: 99.9352% ( 1) 00:19:15.899 35.142 - 35.375: 99.9460% ( 1) 00:19:15.899 36.538 - 36.771: 99.9568% ( 1) 00:19:15.899 39.564 - 39.796: 99.9676% ( 1) 00:19:15.899 44.218 - 44.451: 99.9784% ( 1) 00:19:15.899 46.545 - 46.778: 99.9892% ( 1) 00:19:15.899 54.924 - 55.156: 100.0000% ( 1) 00:19:15.899 00:19:15.899 Complete histogram 00:19:15.899 ================== 00:19:15.899 Range in us Cumulative Count 00:19:15.899 9.076 - 9.135: 0.0216% ( 2) 00:19:15.899 9.135 - 9.193: 0.2376% ( 20) 00:19:15.899 9.193 - 9.251: 1.4685% ( 114) 00:19:15.900 9.251 - 9.309: 4.7295% ( 302) 00:19:15.900 9.309 - 9.367: 11.0031% ( 581) 00:19:15.900 9.367 - 9.425: 19.8899% ( 823) 00:19:15.900 9.425 - 9.484: 28.3015% ( 779) 00:19:15.900 9.484 - 9.542: 34.7263% ( 595) 00:19:15.900 9.542 - 9.600: 40.7299% ( 556) 00:19:15.900 9.600 - 9.658: 47.3491% ( 613) 00:19:15.900 9.658 - 9.716: 54.1302% ( 628) 00:19:15.900 9.716 - 9.775: 60.3930% ( 580) 00:19:15.900 9.775 - 9.833: 64.9606% ( 423) 00:19:15.900 9.833 - 9.891: 67.7465% ( 258) 00:19:15.900 9.891 - 9.949: 69.8629% ( 196) 00:19:15.900 9.949 - 10.007: 70.9319% ( 99) 00:19:15.900 10.007 - 10.065: 71.8173% ( 82) 00:19:15.900 10.065 - 10.124: 72.4976% ( 63) 00:19:15.900 10.124 - 10.182: 73.1131% ( 57) 00:19:15.900 10.182 - 10.240: 73.9553% ( 78) 00:19:15.900 10.240 - 10.298: 75.0675% ( 103) 00:19:15.900 10.298 - 10.356: 76.1149% ( 97) 00:19:15.900 10.356 - 10.415: 76.8923% ( 72) 00:19:15.900 10.415 - 10.473: 77.6482% ( 70) 00:19:15.900 10.473 - 10.531: 78.3501% ( 65) 00:19:15.900 10.531 - 10.589: 79.0519% ( 65) 00:19:15.900 10.589 - 10.647: 79.5270% ( 44) 00:19:15.900 10.647 - 10.705: 79.9050% ( 35) 00:19:15.900 10.705 - 10.764: 80.2181% ( 29) 00:19:15.900 10.764 - 10.822: 80.4125% ( 18) 00:19:15.900 10.822 - 10.880: 80.5529% ( 13) 00:19:15.900 10.880 - 10.938: 80.6500% ( 9) 00:19:15.900 10.938 - 10.996: 80.6824% ( 3) 00:19:15.900 10.996 - 11.055: 80.7256% ( 4) 00:19:15.900 11.055 - 11.113: 80.7472% ( 2) 00:19:15.900 11.113 - 11.171: 80.7796% ( 3) 00:19:15.900 11.171 - 11.229: 80.7904% ( 1) 00:19:15.900 11.287 - 11.345: 80.8228% ( 3) 00:19:15.900 11.345 - 11.404: 80.8660% ( 4) 00:19:15.900 11.404 - 11.462: 80.9848% ( 11) 00:19:15.900 11.462 - 11.520: 81.4275% ( 41) 00:19:15.900 11.520 - 11.578: 82.8744% ( 134) 00:19:15.900 11.578 - 11.636: 84.6885% ( 168) 00:19:15.900 11.636 - 11.695: 87.0424% ( 218) 00:19:15.900 11.695 - 11.753: 89.1804% ( 198) 00:19:15.900 11.753 - 11.811: 90.8973% ( 159) 00:19:15.900 11.811 - 11.869: 92.0095% ( 103) 00:19:15.900 11.869 - 11.927: 92.7870% ( 72) 00:19:15.900 11.927 - 11.985: 93.1541% ( 34) 00:19:15.900 11.985 - 12.044: 93.2729% ( 11) 00:19:15.900 12.044 - 12.102: 93.4024% ( 12) 00:19:15.900 12.102 - 12.160: 93.5428% ( 13) 00:19:15.900 12.160 - 12.218: 93.6724% ( 12) 00:19:15.900 12.218 - 12.276: 93.8236% ( 14) 
00:19:15.900 12.276 - 12.335: 94.0071% ( 17) 00:19:15.900 12.335 - 12.393: 94.2123% ( 19) 00:19:15.900 12.393 - 12.451: 94.5686% ( 33) 00:19:15.900 12.451 - 12.509: 94.8062% ( 22) 00:19:15.900 12.509 - 12.567: 95.2057% ( 37) 00:19:15.900 12.567 - 12.625: 95.5620% ( 33) 00:19:15.900 12.625 - 12.684: 95.8320% ( 25) 00:19:15.900 12.684 - 12.742: 96.0911% ( 24) 00:19:15.900 12.742 - 12.800: 96.2423% ( 14) 00:19:15.900 12.800 - 12.858: 96.3071% ( 6) 00:19:15.900 12.858 - 12.916: 96.3827% ( 7) 00:19:15.900 12.916 - 12.975: 96.4367% ( 5) 00:19:15.900 12.975 - 13.033: 96.4583% ( 2) 00:19:15.900 13.033 - 13.091: 96.4799% ( 2) 00:19:15.900 13.091 - 13.149: 96.5015% ( 2) 00:19:15.900 13.149 - 13.207: 96.5231% ( 2) 00:19:15.900 13.265 - 13.324: 96.5446% ( 2) 00:19:15.900 13.324 - 13.382: 96.5662% ( 2) 00:19:15.900 13.382 - 13.440: 96.5770% ( 1) 00:19:15.900 13.440 - 13.498: 96.6094% ( 3) 00:19:15.900 13.498 - 13.556: 96.6202% ( 1) 00:19:15.900 13.556 - 13.615: 96.6526% ( 3) 00:19:15.900 13.615 - 13.673: 96.6634% ( 1) 00:19:15.900 13.673 - 13.731: 96.6742% ( 1) 00:19:15.900 13.731 - 13.789: 96.6850% ( 1) 00:19:15.900 13.847 - 13.905: 96.6958% ( 1) 00:19:15.900 13.905 - 13.964: 96.7066% ( 1) 00:19:15.900 13.964 - 14.022: 96.7282% ( 2) 00:19:15.900 14.022 - 14.080: 96.7498% ( 2) 00:19:15.900 14.080 - 14.138: 96.7714% ( 2) 00:19:15.900 14.138 - 14.196: 96.7930% ( 2) 00:19:15.900 14.196 - 14.255: 96.8038% ( 1) 00:19:15.900 14.255 - 14.313: 96.8362% ( 3) 00:19:15.900 14.313 - 14.371: 96.8794% ( 4) 00:19:15.900 14.371 - 14.429: 96.9118% ( 3) 00:19:15.900 14.429 - 14.487: 96.9226% ( 1) 00:19:15.900 14.487 - 14.545: 96.9874% ( 6) 00:19:15.900 14.662 - 14.720: 97.0090% ( 2) 00:19:15.900 14.720 - 14.778: 97.0306% ( 2) 00:19:15.900 14.778 - 14.836: 97.0630% ( 3) 00:19:15.900 14.836 - 14.895: 97.1061% ( 4) 00:19:15.900 14.895 - 15.011: 97.1385% ( 3) 00:19:15.900 15.011 - 15.127: 97.1925% ( 5) 00:19:15.900 15.127 - 15.244: 97.2249% ( 3) 00:19:15.900 15.244 - 15.360: 97.2789% ( 5) 00:19:15.900 15.360 - 15.476: 97.3113% ( 3) 00:19:15.900 15.476 - 15.593: 97.3869% ( 7) 00:19:15.900 15.593 - 15.709: 97.4949% ( 10) 00:19:15.900 15.709 - 15.825: 97.5165% ( 2) 00:19:15.900 15.825 - 15.942: 97.5705% ( 5) 00:19:15.900 15.942 - 16.058: 97.6892% ( 11) 00:19:15.900 16.058 - 16.175: 97.7324% ( 4) 00:19:15.900 16.175 - 16.291: 97.7864% ( 5) 00:19:15.900 16.291 - 16.407: 97.8404% ( 5) 00:19:15.900 16.407 - 16.524: 97.9160% ( 7) 00:19:15.900 16.524 - 16.640: 98.0132% ( 9) 00:19:15.900 16.640 - 16.756: 98.0888% ( 7) 00:19:15.900 16.756 - 16.873: 98.1643% ( 7) 00:19:15.900 16.873 - 16.989: 98.2723% ( 10) 00:19:15.900 16.989 - 17.105: 98.3047% ( 3) 00:19:15.900 17.105 - 17.222: 98.3587% ( 5) 00:19:15.900 17.222 - 17.338: 98.4019% ( 4) 00:19:15.900 17.338 - 17.455: 98.4775% ( 7) 00:19:15.900 17.455 - 17.571: 98.5099% ( 3) 00:19:15.900 17.571 - 17.687: 98.5747% ( 6) 00:19:15.900 17.687 - 17.804: 98.6179% ( 4) 00:19:15.900 17.804 - 17.920: 98.6287% ( 1) 00:19:15.900 17.920 - 18.036: 98.6503% ( 2) 00:19:15.900 18.036 - 18.153: 98.6611% ( 1) 00:19:15.900 18.153 - 18.269: 98.6718% ( 1) 00:19:15.900 18.269 - 18.385: 98.7042% ( 3) 00:19:15.900 18.502 - 18.618: 98.7258% ( 2) 00:19:15.900 18.618 - 18.735: 98.7474% ( 2) 00:19:15.900 18.735 - 18.851: 98.7798% ( 3) 00:19:15.900 18.851 - 18.967: 98.8014% ( 2) 00:19:15.900 18.967 - 19.084: 98.8662% ( 6) 00:19:15.900 19.084 - 19.200: 98.8986% ( 3) 00:19:15.900 19.200 - 19.316: 98.9202% ( 2) 00:19:15.900 19.316 - 19.433: 98.9526% ( 3) 00:19:15.900 19.433 - 19.549: 98.9634% ( 1) 00:19:15.900 19.549 - 
19.665: 98.9958% ( 3) 00:19:15.900 19.665 - 19.782: 99.0390% ( 4) 00:19:15.900 20.015 - 20.131: 99.0606% ( 2) 00:19:15.900 20.364 - 20.480: 99.0930% ( 3) 00:19:15.900 20.480 - 20.596: 99.1146% ( 2) 00:19:15.900 20.596 - 20.713: 99.1254% ( 1) 00:19:15.900 20.713 - 20.829: 99.1578% ( 3) 00:19:15.900 20.829 - 20.945: 99.1794% ( 2) 00:19:15.900 21.062 - 21.178: 99.2010% ( 2) 00:19:15.900 21.178 - 21.295: 99.2225% ( 2) 00:19:15.900 21.527 - 21.644: 99.2333% ( 1) 00:19:15.900 21.876 - 21.993: 99.2441% ( 1) 00:19:15.900 21.993 - 22.109: 99.2549% ( 1) 00:19:15.900 22.109 - 22.225: 99.2657% ( 1) 00:19:15.900 22.225 - 22.342: 99.2873% ( 2) 00:19:15.900 22.342 - 22.458: 99.2981% ( 1) 00:19:15.900 22.458 - 22.575: 99.3089% ( 1) 00:19:15.900 22.807 - 22.924: 99.3197% ( 1) 00:19:15.900 23.156 - 23.273: 99.3305% ( 1) 00:19:15.900 23.273 - 23.389: 99.3413% ( 1) 00:19:15.900 23.505 - 23.622: 99.3521% ( 1) 00:19:15.900 23.622 - 23.738: 99.3629% ( 1) 00:19:15.900 23.855 - 23.971: 99.3845% ( 2) 00:19:15.900 23.971 - 24.087: 99.4169% ( 3) 00:19:15.900 24.204 - 24.320: 99.4385% ( 2) 00:19:15.900 24.320 - 24.436: 99.4601% ( 2) 00:19:15.900 24.436 - 24.553: 99.4925% ( 3) 00:19:15.900 24.553 - 24.669: 99.5465% ( 5) 00:19:15.900 24.669 - 24.785: 99.5789% ( 3) 00:19:15.900 24.785 - 24.902: 99.6005% ( 2) 00:19:15.900 24.902 - 25.018: 99.6221% ( 2) 00:19:15.900 25.018 - 25.135: 99.6437% ( 2) 00:19:15.900 25.135 - 25.251: 99.6761% ( 3) 00:19:15.900 25.251 - 25.367: 99.6977% ( 2) 00:19:15.900 25.367 - 25.484: 99.7085% ( 1) 00:19:15.900 25.484 - 25.600: 99.7193% ( 1) 00:19:15.900 25.716 - 25.833: 99.7301% ( 1) 00:19:15.900 25.833 - 25.949: 99.7408% ( 1) 00:19:15.900 25.949 - 26.065: 99.7624% ( 2) 00:19:15.900 26.182 - 26.298: 99.7948% ( 3) 00:19:15.900 26.415 - 26.531: 99.8056% ( 1) 00:19:15.900 26.647 - 26.764: 99.8272% ( 2) 00:19:15.900 26.764 - 26.880: 99.8380% ( 1) 00:19:15.900 26.880 - 26.996: 99.8488% ( 1) 00:19:15.900 27.578 - 27.695: 99.8596% ( 1) 00:19:15.900 27.811 - 27.927: 99.8704% ( 1) 00:19:15.900 29.091 - 29.207: 99.8812% ( 1) 00:19:15.900 29.207 - 29.324: 99.8920% ( 1) 00:19:15.900 31.651 - 31.884: 99.9028% ( 1) 00:19:15.900 37.702 - 37.935: 99.9136% ( 1) 00:19:15.900 39.098 - 39.331: 99.9244% ( 1) 00:19:15.900 39.331 - 39.564: 99.9352% ( 1) 00:19:15.900 39.564 - 39.796: 99.9460% ( 1) 00:19:15.900 40.262 - 40.495: 99.9676% ( 2) 00:19:15.900 40.727 - 40.960: 99.9784% ( 1) 00:19:15.900 67.025 - 67.491: 99.9892% ( 1) 00:19:15.900 126.604 - 127.535: 100.0000% ( 1) 00:19:15.901 00:19:15.901 00:19:15.901 real 0m1.351s 00:19:15.901 user 0m1.124s 00:19:15.901 sys 0m0.172s 00:19:15.901 11:33:21 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.901 11:33:21 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:15.901 ************************************ 00:19:15.901 END TEST nvme_overhead 00:19:15.901 ************************************ 00:19:16.159 11:33:21 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:16.159 11:33:21 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:19:16.159 11:33:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.159 11:33:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.159 ************************************ 00:19:16.159 START TEST nvme_arbitration 00:19:16.159 ************************************ 00:19:16.159 11:33:21 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:19.439 Initializing NVMe Controllers 00:19:19.439 Attached to 0000:00:10.0 00:19:19.439 Attached to 0000:00:11.0 00:19:19.439 Attached to 0000:00:13.0 00:19:19.439 Attached to 0000:00:12.0 00:19:19.439 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:19.439 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:19:19.439 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:19:19.439 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:19:19.439 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:19:19.439 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:19:19.439 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:19.439 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:19.439 Initialization complete. Launching workers. 00:19:19.439 Starting thread on core 1 with urgent priority queue 00:19:19.439 Starting thread on core 2 with urgent priority queue 00:19:19.439 Starting thread on core 0 with urgent priority queue 00:19:19.439 Starting thread on core 3 with urgent priority queue 00:19:19.439 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:19:19.439 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:19:19.439 QEMU NVMe Ctrl (12341 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:19:19.439 QEMU NVMe Ctrl (12342 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:19:19.439 QEMU NVMe Ctrl (12343 ) core 2: 661.33 IO/s 151.21 secs/100000 ios 00:19:19.439 QEMU NVMe Ctrl (12342 ) core 3: 725.33 IO/s 137.87 secs/100000 ios 00:19:19.439 ======================================================== 00:19:19.439 00:19:19.439 00:19:19.439 real 0m3.457s 00:19:19.439 user 0m9.341s 00:19:19.439 sys 0m0.192s 00:19:19.439 ************************************ 00:19:19.439 END TEST nvme_arbitration 00:19:19.439 ************************************ 00:19:19.439 11:33:25 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.439 11:33:25 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:19.439 11:33:25 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:19.439 11:33:25 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:19.439 11:33:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.439 11:33:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.439 ************************************ 00:19:19.439 START TEST nvme_single_aen 00:19:19.439 ************************************ 00:19:19.439 11:33:25 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:20.005 Asynchronous Event Request test 00:19:20.005 Attached to 0000:00:10.0 00:19:20.005 Attached to 0000:00:11.0 00:19:20.005 Attached to 0000:00:13.0 00:19:20.005 Attached to 0000:00:12.0 00:19:20.005 Reset controller to setup AER completions for this process 00:19:20.005 Registering asynchronous event callbacks... 
00:19:20.005 Getting orig temperature thresholds of all controllers
00:19:20.005 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:20.005 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:20.005 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:20.005 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:20.005 Setting all controllers temperature threshold low to trigger AER
00:19:20.005 Waiting for all controllers temperature threshold to be set lower
00:19:20.005 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:20.005 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:19:20.005 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:20.005 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:19:20.005 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:20.005 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:19:20.005 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:20.005 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:19:20.005 Waiting for all controllers to trigger AER and reset threshold
00:19:20.005 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:19:20.005 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:19:20.005 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:19:20.005 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:19:20.005 Cleaning up...
00:19:20.005 ************************************
00:19:20.005 END TEST nvme_single_aen
00:19:20.005 ************************************
00:19:20.005
00:19:20.005 real 0m0.332s
00:19:20.005 user 0m0.119s
00:19:20.005 sys 0m0.153s
00:19:20.005 11:33:25 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:20.005 11:33:25 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:19:20.005 11:33:25 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:19:20.005 11:33:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:20.005 11:33:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:20.005 11:33:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:20.005 ************************************
00:19:20.005 START TEST nvme_doorbell_aers
00:19:20.005 ************************************
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:19:20.005 11:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:19:20.263 [2024-11-20 11:33:25.967437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request.
00:19:30.248 Executing: test_write_invalid_db
00:19:30.248 Waiting for AER completion...
00:19:30.248 Failure: test_write_invalid_db
00:19:30.248
00:19:30.248 Executing: test_invalid_db_write_overflow_sq
00:19:30.248 Waiting for AER completion...
00:19:30.248 Failure: test_invalid_db_write_overflow_sq
00:19:30.248
00:19:30.248 Executing: test_invalid_db_write_overflow_cq
00:19:30.248 Waiting for AER completion...
00:19:30.248 Failure: test_invalid_db_write_overflow_cq
00:19:30.248
00:19:30.248 11:33:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:19:30.248 11:33:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:19:30.248 [2024-11-20 11:33:35.993748] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request.
00:19:40.221 Executing: test_write_invalid_db
00:19:40.221 Waiting for AER completion...
00:19:40.221 Failure: test_write_invalid_db
00:19:40.221
00:19:40.221 Executing: test_invalid_db_write_overflow_sq
00:19:40.221 Waiting for AER completion...
00:19:40.221 Failure: test_invalid_db_write_overflow_sq
00:19:40.221
00:19:40.221 Executing: test_invalid_db_write_overflow_cq
00:19:40.221 Waiting for AER completion...
00:19:40.221 Failure: test_invalid_db_write_overflow_cq
00:19:40.221
00:19:40.221 11:33:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:19:40.221 11:33:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:19:40.507 [2024-11-20 11:33:46.056083] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request.
00:19:50.473 Executing: test_write_invalid_db
00:19:50.473 Waiting for AER completion...
00:19:50.473 Failure: test_write_invalid_db
00:19:50.473
00:19:50.473 Executing: test_invalid_db_write_overflow_sq
00:19:50.473 Waiting for AER completion...
00:19:50.473 Failure: test_invalid_db_write_overflow_sq
00:19:50.473
00:19:50.473 Executing: test_invalid_db_write_overflow_cq
00:19:50.473 Waiting for AER completion...
00:19:50.473 Failure: test_invalid_db_write_overflow_cq 00:19:50.473 00:19:50.473 11:33:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:50.473 11:33:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:19:50.473 [2024-11-20 11:33:56.112395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 Executing: test_write_invalid_db 00:20:00.578 Waiting for AER completion... 00:20:00.578 Failure: test_write_invalid_db 00:20:00.578 00:20:00.578 Executing: test_invalid_db_write_overflow_sq 00:20:00.578 Waiting for AER completion... 00:20:00.578 Failure: test_invalid_db_write_overflow_sq 00:20:00.578 00:20:00.578 Executing: test_invalid_db_write_overflow_cq 00:20:00.578 Waiting for AER completion... 00:20:00.578 Failure: test_invalid_db_write_overflow_cq 00:20:00.578 00:20:00.578 00:20:00.578 real 0m40.255s 00:20:00.578 user 0m34.239s 00:20:00.578 sys 0m5.583s 00:20:00.578 11:34:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.578 11:34:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:20:00.578 ************************************ 00:20:00.578 END TEST nvme_doorbell_aers 00:20:00.578 ************************************ 00:20:00.578 11:34:05 nvme -- nvme/nvme.sh@97 -- # uname 00:20:00.578 11:34:05 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:20:00.578 11:34:05 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:20:00.578 11:34:05 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:20:00.578 11:34:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.578 11:34:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:00.578 ************************************ 00:20:00.578 START TEST nvme_multi_aen 00:20:00.578 ************************************ 00:20:00.578 11:34:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:20:00.578 [2024-11-20 11:34:06.129105] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.129218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.129244] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.131200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.131317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.131358] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.133695] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. 
Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.134058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.134115] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.136358] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.136451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 [2024-11-20 11:34:06.136489] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:20:00.578 Child process pid: 65278 00:20:00.836 [Child] Asynchronous Event Request test 00:20:00.836 [Child] Attached to 0000:00:10.0 00:20:00.836 [Child] Attached to 0000:00:11.0 00:20:00.836 [Child] Attached to 0000:00:13.0 00:20:00.836 [Child] Attached to 0000:00:12.0 00:20:00.836 [Child] Registering asynchronous event callbacks... 00:20:00.836 [Child] Getting orig temperature thresholds of all controllers 00:20:00.836 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:20:00.836 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:20:00.836 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:20:00.836 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:20:00.836 [Child] Waiting for all controllers to trigger AER and reset threshold 00:20:00.836 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:20:00.836 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:20:00.836 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:20:00.836 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:20:00.836 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:20:00.836 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:20:00.836 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:20:00.836 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:20:00.836 [Child] Cleaning up... 00:20:00.836 Asynchronous Event Request test 00:20:00.836 Attached to 0000:00:10.0 00:20:00.836 Attached to 0000:00:11.0 00:20:00.836 Attached to 0000:00:13.0 00:20:00.836 Attached to 0000:00:12.0 00:20:00.836 Reset controller to setup AER completions for this process 00:20:00.836 Registering asynchronous event callbacks... 
00:20:00.836 Getting orig temperature thresholds of all controllers
00:20:00.836 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:00.836 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:00.836 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:00.836 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:00.836 Setting all controllers temperature threshold low to trigger AER
00:20:00.836 Waiting for all controllers temperature threshold to be set lower
00:20:00.836 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:00.836 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:20:00.836 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:00.836 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:20:00.836 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:00.836 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:20:00.836 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:00.836 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:20:00.836 Waiting for all controllers to trigger AER and reset threshold
00:20:00.836 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:00.836 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:00.836 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:00.836 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:00.836 Cleaning up...
00:20:00.836 ************************************
00:20:00.836 END TEST nvme_multi_aen
00:20:00.836 ************************************
00:20:00.836
00:20:00.836 real 0m0.629s
00:20:00.836 user 0m0.227s
00:20:00.836 sys 0m0.287s
00:20:00.836 11:34:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:00.836 11:34:06 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:20:00.836 11:34:06 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:20:00.836 11:34:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:00.836 11:34:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:00.836 11:34:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:00.836 ************************************
00:20:00.836 START TEST nvme_startup
00:20:00.836 ************************************
00:20:00.836 11:34:06 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:20:01.094 Initializing NVMe Controllers
00:20:01.094 Attached to 0000:00:10.0
00:20:01.094 Attached to 0000:00:11.0
00:20:01.094 Attached to 0000:00:13.0
00:20:01.094 Attached to 0000:00:12.0
00:20:01.094 Initialization complete.
00:20:01.094 Time used:202061.875 (us).
00:20:01.094 ************************************
00:20:01.094 END TEST nvme_startup
00:20:01.094 ************************************
00:20:01.094
00:20:01.094 real 0m0.292s
00:20:01.094 user 0m0.109s
00:20:01.094 sys 0m0.140s
00:20:01.094 11:34:06 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:01.094 11:34:06 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:20:01.352 11:34:06 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:20:01.352 11:34:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:01.352 11:34:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:01.352 11:34:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:01.352 ************************************
00:20:01.352 START TEST nvme_multi_secondary
00:20:01.352 ************************************
00:20:01.352 11:34:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:20:01.352 11:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65334
00:20:01.352 11:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:20:01.352 11:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65335
00:20:01.352 11:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:20:01.352 11:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:20:04.634 Initializing NVMe Controllers
00:20:04.634 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:20:04.634 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:20:04.634 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:20:04.634 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:20:04.634 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:20:04.634 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:20:04.634 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:20:04.635 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:20:04.635 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:20:04.635 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:20:04.635 Initialization complete. Launching workers.
00:20:04.635 ======================================================== 00:20:04.635 Latency(us) 00:20:04.635 Device Information : IOPS MiB/s Average min max 00:20:04.635 PCIE (0000:00:10.0) NSID 1 from core 1: 5958.10 23.27 2683.63 924.14 6881.02 00:20:04.635 PCIE (0000:00:11.0) NSID 1 from core 1: 5958.10 23.27 2685.28 948.17 6766.87 00:20:04.635 PCIE (0000:00:13.0) NSID 1 from core 1: 5958.10 23.27 2685.34 936.51 6536.99 00:20:04.635 PCIE (0000:00:12.0) NSID 1 from core 1: 5958.10 23.27 2685.55 934.62 5979.14 00:20:04.635 PCIE (0000:00:12.0) NSID 2 from core 1: 5958.10 23.27 2685.61 945.40 5914.96 00:20:04.635 PCIE (0000:00:12.0) NSID 3 from core 1: 5958.10 23.27 2685.66 946.89 7130.57 00:20:04.635 ======================================================== 00:20:04.635 Total : 35748.62 139.64 2685.18 924.14 7130.57 00:20:04.635 00:20:04.635 Initializing NVMe Controllers 00:20:04.635 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:04.635 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:04.635 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:04.635 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:04.635 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:20:04.635 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:20:04.635 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:20:04.635 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:20:04.635 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:20:04.635 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:20:04.635 Initialization complete. Launching workers. 00:20:04.635 ======================================================== 00:20:04.635 Latency(us) 00:20:04.635 Device Information : IOPS MiB/s Average min max 00:20:04.635 PCIE (0000:00:10.0) NSID 1 from core 2: 2367.44 9.25 6756.54 1752.88 17178.41 00:20:04.635 PCIE (0000:00:11.0) NSID 1 from core 2: 2367.44 9.25 6757.98 1835.01 18067.42 00:20:04.635 PCIE (0000:00:13.0) NSID 1 from core 2: 2367.44 9.25 6759.09 1529.49 17665.69 00:20:04.635 PCIE (0000:00:12.0) NSID 1 from core 2: 2367.44 9.25 6767.00 1657.61 17752.77 00:20:04.635 PCIE (0000:00:12.0) NSID 2 from core 2: 2367.44 9.25 6767.04 1769.11 14609.13 00:20:04.635 PCIE (0000:00:12.0) NSID 3 from core 2: 2367.44 9.25 6766.88 1856.73 14732.19 00:20:04.635 ======================================================== 00:20:04.635 Total : 14204.67 55.49 6762.42 1529.49 18067.42 00:20:04.635 00:20:04.893 11:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65334 00:20:06.795 Initializing NVMe Controllers 00:20:06.795 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:06.795 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:06.795 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:06.795 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:06.795 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:06.795 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:06.795 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:06.795 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:06.795 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:06.795 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:06.795 Initialization complete. Launching workers. 
00:20:06.795 ======================================================== 00:20:06.795 Latency(us) 00:20:06.795 Device Information : IOPS MiB/s Average min max 00:20:06.795 PCIE (0000:00:10.0) NSID 1 from core 0: 7806.94 30.50 2047.69 948.68 7951.70 00:20:06.795 PCIE (0000:00:11.0) NSID 1 from core 0: 7806.94 30.50 2048.90 968.11 7590.41 00:20:06.795 PCIE (0000:00:13.0) NSID 1 from core 0: 7806.94 30.50 2048.82 967.49 8251.45 00:20:06.795 PCIE (0000:00:12.0) NSID 1 from core 0: 7806.94 30.50 2048.74 955.92 7946.07 00:20:06.795 PCIE (0000:00:12.0) NSID 2 from core 0: 7806.94 30.50 2048.66 926.40 8046.44 00:20:06.795 PCIE (0000:00:12.0) NSID 3 from core 0: 7806.94 30.50 2048.57 878.58 7879.84 00:20:06.795 ======================================================== 00:20:06.795 Total : 46841.65 182.98 2048.56 878.58 8251.45 00:20:06.795 00:20:06.795 11:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65335 00:20:06.795 11:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65410 00:20:06.795 11:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:20:06.795 11:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65411 00:20:06.795 11:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:20:06.795 11:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:20:10.078 Initializing NVMe Controllers 00:20:10.078 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:10.078 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:10.078 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:10.078 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:10.078 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:20:10.078 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:20:10.078 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:20:10.078 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:20:10.078 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:20:10.078 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:20:10.078 Initialization complete. Launching workers. 
00:20:10.078 ======================================================== 00:20:10.078 Latency(us) 00:20:10.078 Device Information : IOPS MiB/s Average min max 00:20:10.078 PCIE (0000:00:10.0) NSID 1 from core 1: 5468.16 21.36 2924.02 1053.68 7944.91 00:20:10.078 PCIE (0000:00:11.0) NSID 1 from core 1: 5468.16 21.36 2925.47 1092.23 8125.52 00:20:10.078 PCIE (0000:00:13.0) NSID 1 from core 1: 5468.16 21.36 2925.37 1109.81 7948.64 00:20:10.078 PCIE (0000:00:12.0) NSID 1 from core 1: 5468.16 21.36 2925.39 1092.53 7439.38 00:20:10.078 PCIE (0000:00:12.0) NSID 2 from core 1: 5473.49 21.38 2922.45 1079.12 7019.54 00:20:10.078 PCIE (0000:00:12.0) NSID 3 from core 1: 5473.49 21.38 2922.33 1066.13 7101.11 00:20:10.078 ======================================================== 00:20:10.078 Total : 32819.64 128.20 2924.17 1053.68 8125.52 00:20:10.078 00:20:10.337 Initializing NVMe Controllers 00:20:10.337 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:10.337 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:10.337 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:10.337 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:10.337 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:10.337 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:10.337 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:10.337 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:10.337 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:10.337 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:10.337 Initialization complete. Launching workers. 00:20:10.337 ======================================================== 00:20:10.337 Latency(us) 00:20:10.337 Device Information : IOPS MiB/s Average min max 00:20:10.337 PCIE (0000:00:10.0) NSID 1 from core 0: 5373.24 20.99 2975.69 1007.08 20713.05 00:20:10.337 PCIE (0000:00:11.0) NSID 1 from core 0: 5373.24 20.99 2977.15 1028.91 20443.15 00:20:10.337 PCIE (0000:00:13.0) NSID 1 from core 0: 5373.24 20.99 2977.00 1036.19 20012.10 00:20:10.337 PCIE (0000:00:12.0) NSID 1 from core 0: 5373.24 20.99 2976.84 1031.61 20123.71 00:20:10.337 PCIE (0000:00:12.0) NSID 2 from core 0: 5373.24 20.99 2976.69 1022.65 20602.53 00:20:10.337 PCIE (0000:00:12.0) NSID 3 from core 0: 5373.24 20.99 2976.53 1037.20 20631.87 00:20:10.337 ======================================================== 00:20:10.337 Total : 32239.42 125.94 2976.65 1007.08 20713.05 00:20:10.337 00:20:12.869 Initializing NVMe Controllers 00:20:12.869 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:12.869 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:12.869 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:12.869 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:12.869 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:20:12.869 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:20:12.869 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:20:12.869 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:20:12.869 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:20:12.869 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:20:12.869 Initialization complete. Launching workers. 
00:20:12.869 ========================================================
00:20:12.869 Latency(us)
00:20:12.869 Device Information : IOPS MiB/s Average min max
00:20:12.869 PCIE (0000:00:10.0) NSID 1 from core 2: 3778.99 14.76 4231.43 975.70 14850.45
00:20:12.869 PCIE (0000:00:11.0) NSID 1 from core 2: 3778.99 14.76 4233.39 983.46 18739.45
00:20:12.869 PCIE (0000:00:13.0) NSID 1 from core 2: 3778.99 14.76 4232.88 955.75 16953.34
00:20:12.869 PCIE (0000:00:12.0) NSID 1 from core 2: 3778.99 14.76 4233.21 969.95 14640.96
00:20:12.869 PCIE (0000:00:12.0) NSID 2 from core 2: 3778.99 14.76 4232.48 982.74 14481.41
00:20:12.869 PCIE (0000:00:12.0) NSID 3 from core 2: 3778.99 14.76 4229.62 934.73 14925.53
00:20:12.869 ========================================================
00:20:12.869 Total : 22673.92 88.57 4232.17 934.73 18739.45
00:20:12.869
00:20:12.869 ************************************
00:20:12.869 END TEST nvme_multi_secondary
00:20:12.869 ************************************
00:20:12.869 11:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65410
00:20:12.869 11:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65411
00:20:12.869
00:20:12.869 real 0m11.392s
00:20:12.869 user 0m18.629s
00:20:12.869 sys 0m1.068s
00:20:12.869 11:34:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:12.869 11:34:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:20:12.869 11:34:18 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:20:12.869 11:34:18 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:20:12.869 11:34:18 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64332 ]]
00:20:12.869 11:34:18 nvme -- common/autotest_common.sh@1094 -- # kill 64332
00:20:12.869 11:34:18 nvme -- common/autotest_common.sh@1095 -- # wait 64332
00:20:12.869 [2024-11-20 11:34:18.327783] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.327878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.327937] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.327970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.331598] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.331689] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.331724] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.331755] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.335403] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.335876] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.335919] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.335955] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.339761] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.340381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.341057] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.341708] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65277) is not found. Dropping the request.
00:20:12.870 [2024-11-20 11:34:18.610475] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited.
00:20:12.870 11:34:18 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:20:12.870 11:34:18 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:20:12.870 11:34:18 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:20:12.870 11:34:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:12.870 11:34:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:12.870 11:34:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:13.128 ************************************
00:20:13.128 START TEST bdev_nvme_reset_stuck_adm_cmd
00:20:13.128 ************************************
00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:20:13.128 * Looking for test storage...
00:20:13.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.128 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:13.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.129 --rc genhtml_branch_coverage=1 00:20:13.129 --rc genhtml_function_coverage=1 00:20:13.129 --rc genhtml_legend=1 00:20:13.129 --rc geninfo_all_blocks=1 00:20:13.129 --rc geninfo_unexecuted_blocks=1 00:20:13.129 00:20:13.129 ' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:13.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.129 --rc genhtml_branch_coverage=1 00:20:13.129 --rc genhtml_function_coverage=1 00:20:13.129 --rc genhtml_legend=1 00:20:13.129 --rc geninfo_all_blocks=1 00:20:13.129 --rc geninfo_unexecuted_blocks=1 00:20:13.129 00:20:13.129 ' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:13.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.129 --rc genhtml_branch_coverage=1 00:20:13.129 --rc genhtml_function_coverage=1 00:20:13.129 --rc genhtml_legend=1 00:20:13.129 --rc geninfo_all_blocks=1 00:20:13.129 --rc geninfo_unexecuted_blocks=1 00:20:13.129 00:20:13.129 ' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:13.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.129 --rc genhtml_branch_coverage=1 00:20:13.129 --rc genhtml_function_coverage=1 00:20:13.129 --rc genhtml_legend=1 00:20:13.129 --rc geninfo_all_blocks=1 00:20:13.129 --rc geninfo_unexecuted_blocks=1 00:20:13.129 00:20:13.129 ' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:20:13.129 
11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:20:13.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65578 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65578 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65578 ']' 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
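For readers following the trace: the BDF discovery above boils down to one pipeline — scripts/gen_nvme.sh emits a JSON bdev config for every NVMe device it finds, and jq pulls each controller's PCI address (traddr). A minimal sketch of that flow, assuming the same $rootdir layout as this run (the real helpers in autotest_common.sh handle a few more corner cases that are omitted here):

    rootdir=/home/vagrant/spdk_repo/spdk   # as in this run

    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh prints a JSON config; each controller carries its PCI
        # address in .params.traddr (0000:00:10.0 .. 0000:00:13.0 here).
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # no controllers found
        printf '%s\n' "${bdfs[@]}"
    }

    # The reset test only needs one device, so it takes the first address.
    get_first_nvme_bdf() {
        get_nvme_bdfs | head -n 1
    }

    bdf=$(get_first_nvme_bdf)   # -> 0000:00:10.0 in this log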
00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.129 11:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:13.387 [2024-11-20 11:34:18.998479] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:13.387 [2024-11-20 11:34:18.998937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65578 ] 00:20:13.649 [2024-11-20 11:34:19.234635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.908 [2024-11-20 11:34:19.426505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.908 [2024-11-20 11:34:19.426594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.908 [2024-11-20 11:34:19.426655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.908 [2024-11-20 11:34:19.426658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.842 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.842 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:14.843 nvme0n1 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_DRNYi.txt 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:14.843 true 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732102460 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65601 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:14.843 11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:20:14.843 
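Strung together, the RPC calls traced above form the whole stuck-admin-command scenario: attach the controller, arm a one-shot error on the next admin Get Features command (opc 10 = 0x0a, matching the GET FEATURES NUMBER OF QUEUES completion printed further down), hold it with --do_not_submit, then prove a controller reset flushes it. A condensed replay of that sequence, using only the RPC names and flags visible in this log; the long base64 command payload is elided:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the first controller as bdev "nvme0".
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # Arm one injected error: the next admin opc 10 (Get Features) is held
    # for up to 15 s (--do_not_submit), then completed with SCT=0 / SC=1.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Fire the admin command in the background; it is now "stuck".
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_base64" &

    sleep 2
    # The reset must complete the pending request manually and still succeed.
    $rpc bdev_nvme_reset_controller nvme0
    wait   # collect the send_cmd JSON (the test saves it to a temp file)

The completion captured in that temp file is what the test later decodes with jq -r .cpl and base64_decode_bits to check that the injected SCT/SC values actually came back.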
11:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:16.743 [2024-11-20 11:34:22.413053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:20:16.743 [2024-11-20 11:34:22.413589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:16.743 [2024-11-20 11:34:22.413639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:16.743 [2024-11-20 11:34:22.413662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.743 [2024-11-20 11:34:22.415872] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:20:16.743 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65601 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65601 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65601 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:20:16.743 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_DRNYi.txt 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_DRNYi.txt 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65578 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65578 ']' 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65578 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65578 00:20:17.002 killing process with pid 65578 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65578' 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65578 00:20:17.002 11:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65578 00:20:19.535 11:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:20:19.535 11:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:20:19.535 ************************************ 00:20:19.535 END TEST bdev_nvme_reset_stuck_adm_cmd 00:20:19.535 ************************************ 00:20:19.535 00:20:19.535 real 0m6.217s 
00:20:19.535 user 0m21.787s 00:20:19.535 sys 0m0.781s 00:20:19.535 11:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.535 11:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:19.535 11:34:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:20:19.535 11:34:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:20:19.535 11:34:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:19.535 11:34:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.535 11:34:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:19.535 ************************************ 00:20:19.535 START TEST nvme_fio 00:20:19.535 ************************************ 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:19.535 11:34:24 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:19.535 11:34:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:19.535 11:34:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:19.535 11:34:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:20.104 11:34:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:20.104 11:34:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:20.104 11:34:25 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:20.104 11:34:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:20.104 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:20.104 fio-3.35 00:20:20.104 Starting 1 thread 00:20:24.291 00:20:24.291 test: (groupid=0, jobs=1): err= 0: pid=65753: Wed Nov 20 11:34:29 2024 00:20:24.291 read: IOPS=13.5k, BW=52.7MiB/s (55.2MB/s)(105MiB/2001msec) 00:20:24.291 slat (nsec): min=4843, max=63089, avg=7707.46, stdev=2488.20 00:20:24.291 clat (usec): min=356, max=9836, avg=4732.58, stdev=805.42 00:20:24.291 lat (usec): min=363, max=9845, avg=4740.29, stdev=806.22 00:20:24.291 clat percentiles (usec): 00:20:24.291 | 1.00th=[ 2835], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 4113], 00:20:24.291 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 00:20:24.291 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5538], 95.00th=[ 5932], 00:20:24.291 | 99.00th=[ 7701], 99.50th=[ 8291], 99.90th=[ 8848], 99.95th=[ 9110], 00:20:24.291 | 99.99th=[ 9634] 00:20:24.291 bw ( KiB/s): min=52016, max=56696, per=100.00%, avg=54520.00, stdev=2357.18, samples=3 00:20:24.291 iops : min=13004, max=14174, avg=13630.67, stdev=589.43, samples=3 00:20:24.291 write: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(105MiB/2001msec); 0 zone resets 00:20:24.291 slat (usec): min=4, max=119, avg= 8.01, stdev= 2.65 00:20:24.291 clat (usec): min=317, max=9719, avg=4727.51, stdev=800.68 00:20:24.291 lat (usec): min=325, max=9725, avg=4735.51, stdev=801.53 00:20:24.291 clat percentiles (usec): 00:20:24.291 | 1.00th=[ 2868], 5.00th=[ 3654], 10.00th=[ 3818], 20.00th=[ 4080], 00:20:24.291 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 00:20:24.291 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5538], 95.00th=[ 5932], 00:20:24.291 | 99.00th=[ 7701], 99.50th=[ 8160], 99.90th=[ 8848], 99.95th=[ 9110], 00:20:24.291 | 99.99th=[ 9372] 00:20:24.292 bw ( KiB/s): min=51648, max=57008, per=100.00%, avg=54552.00, stdev=2707.94, samples=3 00:20:24.292 iops : min=12912, max=14252, avg=13638.00, stdev=676.98, samples=3 00:20:24.292 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:20:24.292 lat (msec) : 2=0.14%, 4=17.62%, 10=82.21% 00:20:24.292 cpu : usr=98.90%, sys=0.10%, ctx=3, majf=0, minf=607 00:20:24.292 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:24.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.292 issued rwts: total=26987,26965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.292 00:20:24.292 Run status group 0 (all jobs): 00:20:24.292 READ: bw=52.7MiB/s (55.2MB/s), 52.7MiB/s-52.7MiB/s (55.2MB/s-55.2MB/s), io=105MiB (111MB), run=2001-2001msec 00:20:24.292 WRITE: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=105MiB (110MB), run=2001-2001msec 00:20:24.292 ----------------------------------------------------- 00:20:24.292 Suppressions used: 00:20:24.292 count bytes template 00:20:24.292 1 32 /usr/src/fio/parse.c 00:20:24.292 1 8 libtcmalloc_minimal.so 00:20:24.292 ----------------------------------------------------- 00:20:24.292 00:20:24.292 11:34:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:24.292 11:34:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:24.292 11:34:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:24.292 11:34:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:24.292 11:34:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:24.292 11:34:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:24.549 11:34:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:24.549 11:34:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:24.549 11:34:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:24.807 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:24.807 fio-3.35 00:20:24.807 Starting 1 thread 00:20:28.091 00:20:28.091 test: (groupid=0, jobs=1): err= 0: pid=65819: Wed Nov 20 11:34:33 2024 00:20:28.091 read: IOPS=17.1k, BW=66.9MiB/s (70.2MB/s)(134MiB/2001msec) 00:20:28.091 slat (nsec): min=4599, max=48828, avg=6046.62, stdev=1952.66 00:20:28.091 clat (usec): min=274, max=8257, avg=3714.00, stdev=352.55 00:20:28.091 lat (usec): min=281, max=8305, avg=3720.05, stdev=352.89 00:20:28.091 clat percentiles (usec): 00:20:28.091 | 1.00th=[ 3097], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3523], 00:20:28.091 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3720], 00:20:28.091 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 3949], 95.00th=[ 4047], 00:20:28.091 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7111], 99.95th=[ 7373], 00:20:28.091 | 99.99th=[ 8160] 00:20:28.091 bw ( KiB/s): min=64479, max=71264, per=99.83%, avg=68434.33, stdev=3529.79, samples=3 00:20:28.091 iops : min=16119, max=17816, avg=17108.33, stdev=882.87, samples=3 00:20:28.091 write: IOPS=17.2k, BW=67.0MiB/s (70.3MB/s)(134MiB/2001msec); 0 zone resets 00:20:28.091 slat (nsec): min=4740, max=55639, avg=6168.04, stdev=2055.59 00:20:28.091 clat (usec): min=321, max=8193, avg=3724.36, stdev=352.99 00:20:28.091 lat (usec): min=327, max=8207, avg=3730.52, stdev=353.32 00:20:28.091 clat percentiles (usec): 00:20:28.091 | 1.00th=[ 3130], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:20:28.091 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720], 00:20:28.091 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 3949], 95.00th=[ 4047], 00:20:28.091 | 99.00th=[ 5342], 99.50th=[ 6063], 99.90th=[ 7177], 99.95th=[ 7504], 00:20:28.091 | 99.99th=[ 8029] 00:20:28.091 bw ( KiB/s): min=64894, max=70720, per=99.45%, avg=68250.00, stdev=3012.36, samples=3 00:20:28.091 iops : min=16223, max=17680, avg=17062.33, stdev=753.37, samples=3 00:20:28.091 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:20:28.091 lat (msec) : 2=0.10%, 4=92.69%, 10=7.17% 00:20:28.091 cpu : usr=99.00%, sys=0.10%, ctx=2, majf=0, minf=608 00:20:28.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:28.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.091 issued rwts: total=34291,34331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.091 00:20:28.091 Run status group 0 (all jobs): 00:20:28.091 READ: bw=66.9MiB/s (70.2MB/s), 66.9MiB/s-66.9MiB/s (70.2MB/s-70.2MB/s), io=134MiB (140MB), run=2001-2001msec 00:20:28.091 WRITE: bw=67.0MiB/s (70.3MB/s), 67.0MiB/s-67.0MiB/s (70.3MB/s-70.3MB/s), io=134MiB (141MB), run=2001-2001msec 00:20:28.349 ----------------------------------------------------- 00:20:28.349 Suppressions used: 00:20:28.349 count bytes template 00:20:28.349 1 32 /usr/src/fio/parse.c 00:20:28.349 1 8 libtcmalloc_minimal.so 00:20:28.349 ----------------------------------------------------- 00:20:28.349 00:20:28.349 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:20:28.349 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:28.349 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:28.349 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:28.916 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:28.916 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:29.174 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:29.174 11:34:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:29.174 11:34:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:29.174 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:29.174 fio-3.35 00:20:29.174 Starting 1 thread 00:20:33.389 00:20:33.389 test: (groupid=0, jobs=1): err= 0: pid=65885: Wed Nov 20 11:34:38 2024 00:20:33.389 read: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec) 00:20:33.389 slat (nsec): min=4629, max=46838, avg=6223.22, stdev=1901.60 00:20:33.389 clat (usec): min=348, max=8210, avg=3814.95, stdev=396.07 00:20:33.389 lat (usec): min=354, max=8255, avg=3821.17, stdev=396.65 00:20:33.389 clat percentiles (usec): 00:20:33.389 | 1.00th=[ 3064], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:20:33.389 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 
00:20:33.389 | 70.00th=[ 3851], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:20:33.389 | 99.00th=[ 4817], 99.50th=[ 5211], 99.90th=[ 6259], 99.95th=[ 6915], 00:20:33.389 | 99.99th=[ 8094] 00:20:33.389 bw ( KiB/s): min=62888, max=68872, per=98.40%, avg=65597.33, stdev=3031.79, samples=3 00:20:33.389 iops : min=15722, max=17218, avg=16399.33, stdev=757.95, samples=3 00:20:33.389 write: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(131MiB/2001msec); 0 zone resets 00:20:33.389 slat (nsec): min=4740, max=37673, avg=6324.36, stdev=1874.78 00:20:33.389 clat (usec): min=240, max=8132, avg=3827.23, stdev=403.37 00:20:33.389 lat (usec): min=246, max=8149, avg=3833.55, stdev=403.92 00:20:33.389 clat percentiles (usec): 00:20:33.389 | 1.00th=[ 3064], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:20:33.389 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:20:33.389 | 70.00th=[ 3884], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4490], 00:20:33.390 | 99.00th=[ 4883], 99.50th=[ 5276], 99.90th=[ 6259], 99.95th=[ 7046], 00:20:33.390 | 99.99th=[ 7963] 00:20:33.390 bw ( KiB/s): min=63216, max=68696, per=97.94%, avg=65426.67, stdev=2889.32, samples=3 00:20:33.390 iops : min=15804, max=17174, avg=16356.67, stdev=722.33, samples=3 00:20:33.390 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:20:33.390 lat (msec) : 2=0.05%, 4=75.08%, 10=24.83% 00:20:33.390 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=607 00:20:33.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:33.390 issued rwts: total=33350,33419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:33.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:33.390 00:20:33.390 Run status group 0 (all jobs): 00:20:33.390 READ: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:20:33.390 WRITE: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=131MiB (137MB), run=2001-2001msec 00:20:33.390 ----------------------------------------------------- 00:20:33.390 Suppressions used: 00:20:33.390 count bytes template 00:20:33.390 1 32 /usr/src/fio/parse.c 00:20:33.390 1 8 libtcmalloc_minimal.so 00:20:33.390 ----------------------------------------------------- 00:20:33.390 00:20:33.390 11:34:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:33.390 11:34:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:33.390 11:34:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:33.390 11:34:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:33.390 11:34:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:33.390 11:34:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:33.659 11:34:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:33.659 11:34:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:33.659 11:34:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:33.659 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:33.659 fio-3.35 00:20:33.659 Starting 1 thread 00:20:37.867 00:20:37.867 test: (groupid=0, jobs=1): err= 0: pid=65951: Wed Nov 20 11:34:42 2024 00:20:37.867 read: IOPS=13.7k, BW=53.4MiB/s (55.9MB/s)(107MiB/2001msec) 00:20:37.867 slat (nsec): min=4742, max=48290, avg=7606.11, stdev=2612.95 00:20:37.867 clat (usec): min=675, max=9017, avg=4667.99, stdev=938.94 00:20:37.867 lat (usec): min=693, max=9065, avg=4675.59, stdev=940.07 00:20:37.867 clat percentiles (usec): 00:20:37.867 | 1.00th=[ 3195], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 4146], 00:20:37.867 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:20:37.867 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 6259], 95.00th=[ 6915], 00:20:37.867 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 7963], 99.95th=[ 8029], 00:20:37.867 | 99.99th=[ 8979] 00:20:37.867 bw ( KiB/s): min=50224, max=60536, per=98.68%, avg=53914.67, stdev=5746.82, samples=3 00:20:37.867 iops : min=12556, max=15134, avg=13478.67, stdev=1436.71, samples=3 00:20:37.867 write: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(107MiB/2001msec); 0 zone resets 00:20:37.867 slat (nsec): min=4802, max=53094, avg=7777.74, stdev=2696.25 00:20:37.867 clat (usec): min=536, max=8924, avg=4675.86, stdev=939.79 00:20:37.867 lat (usec): min=547, max=8953, avg=4683.63, stdev=940.96 00:20:37.867 clat percentiles (usec): 00:20:37.867 | 1.00th=[ 3163], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 4146], 00:20:37.868 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:20:37.868 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 6259], 95.00th=[ 6915], 00:20:37.868 | 99.00th=[ 7439], 99.50th=[ 7504], 99.90th=[ 7832], 
99.95th=[ 8029], 00:20:37.868 | 99.99th=[ 8848] 00:20:37.868 bw ( KiB/s): min=50480, max=60704, per=98.82%, avg=53936.00, stdev=5861.70, samples=3 00:20:37.868 iops : min=12620, max=15176, avg=13484.00, stdev=1465.43, samples=3 00:20:37.868 lat (usec) : 750=0.01%, 1000=0.01% 00:20:37.868 lat (msec) : 2=0.08%, 4=15.88%, 10=84.02% 00:20:37.868 cpu : usr=98.60%, sys=0.30%, ctx=4, majf=0, minf=606 00:20:37.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:37.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.868 issued rwts: total=27330,27303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.868 00:20:37.868 Run status group 0 (all jobs): 00:20:37.868 READ: bw=53.4MiB/s (55.9MB/s), 53.4MiB/s-53.4MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2001-2001msec 00:20:37.868 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2001-2001msec 00:20:37.868 ----------------------------------------------------- 00:20:37.868 Suppressions used: 00:20:37.868 count bytes template 00:20:37.868 1 32 /usr/src/fio/parse.c 00:20:37.868 1 8 libtcmalloc_minimal.so 00:20:37.868 ----------------------------------------------------- 00:20:37.868 00:20:37.868 11:34:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:37.868 11:34:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:20:37.868 00:20:37.868 real 0m18.385s 00:20:37.868 user 0m14.505s 00:20:37.868 sys 0m2.928s 00:20:37.868 11:34:43 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.868 ************************************ 00:20:37.868 END TEST nvme_fio 00:20:37.868 ************************************ 00:20:37.868 11:34:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:20:37.868 ************************************ 00:20:37.868 END TEST nvme 00:20:37.868 ************************************ 00:20:37.868 00:20:37.868 real 1m33.983s 00:20:37.868 user 3m49.593s 00:20:37.868 sys 0m16.218s 00:20:37.868 11:34:43 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.868 11:34:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:37.868 11:34:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:20:37.868 11:34:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:37.868 11:34:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.868 11:34:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.868 11:34:43 -- common/autotest_common.sh@10 -- # set +x 00:20:37.868 ************************************ 00:20:37.868 START TEST nvme_scc 00:20:37.868 ************************************ 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:37.868 * Looking for test storage... 
00:20:37.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.868 --rc genhtml_branch_coverage=1 00:20:37.868 --rc genhtml_function_coverage=1 00:20:37.868 --rc genhtml_legend=1 00:20:37.868 --rc geninfo_all_blocks=1 00:20:37.868 --rc geninfo_unexecuted_blocks=1 00:20:37.868 00:20:37.868 ' 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.868 --rc genhtml_branch_coverage=1 00:20:37.868 --rc genhtml_function_coverage=1 00:20:37.868 --rc genhtml_legend=1 00:20:37.868 --rc geninfo_all_blocks=1 00:20:37.868 --rc geninfo_unexecuted_blocks=1 00:20:37.868 00:20:37.868 ' 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.868 --rc genhtml_branch_coverage=1 00:20:37.868 --rc genhtml_function_coverage=1 00:20:37.868 --rc genhtml_legend=1 00:20:37.868 --rc geninfo_all_blocks=1 00:20:37.868 --rc geninfo_unexecuted_blocks=1 00:20:37.868 00:20:37.868 ' 00:20:37.868 11:34:43 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.868 --rc genhtml_branch_coverage=1 00:20:37.868 --rc genhtml_function_coverage=1 00:20:37.868 --rc genhtml_legend=1 00:20:37.868 --rc geninfo_all_blocks=1 00:20:37.868 --rc geninfo_unexecuted_blocks=1 00:20:37.868 00:20:37.868 ' 00:20:37.868 11:34:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:37.868 11:34:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:37.868 11:34:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:37.868 11:34:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:37.868 11:34:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.868 11:34:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.868 11:34:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.868 11:34:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.868 11:34:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.868 11:34:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:20:37.869 11:34:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
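The lcov version gate replayed here (and earlier in the nvme suite) rests on one helper: lt splits both version strings on '.' and '-' and compares them component by component, so "lt 1.15 2" decides at the first component. A stripped-down sketch of that comparison (the real cmp_versions in scripts/common.sh also normalizes non-numeric components through its decimal helper, skipped here):

    lt() {
        local ver1 ver2 v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        # Missing components count as 0, so 1.15 vs 2 reads as 1.15 vs 2.0.
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "enable branch/function coverage flags"   # prints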
00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:37.869 11:34:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:20:37.869 11:34:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.869 11:34:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:20:37.869 11:34:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:20:37.869 11:34:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:20:37.869 11:34:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:38.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.383 Waiting for block devices as requested 00:20:38.383 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.383 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.640 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.640 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:43.915 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:43.915 11:34:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:43.915 11:34:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:20:43.915 11:34:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:43.915 11:34:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:43.915 11:34:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:20:43.915 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # remaining id-ctrl registers parsed into nvme0[] (condensed from the per-field trace):
  ssvid=0x1af4  sn='12341 '  mn='QEMU NVMe Ctrl '  fr='8.0.0 '  rab=6  ieee=525400  cmic=0  mdts=7  cntlid=0  ver=0x10400
  rtd3r=0  rtd3e=0  oaes=0x100  ctratt=0x8000  rrls=0  cntrltype=1  fguid=00000000-0000-0000-0000-000000000000
  crdt1=0  crdt2=0  crdt3=0  nvmsr=0  vwci=0  mec=0  oacs=0x12a  acl=3  aerl=3  frmw=0x3  lpa=0x7  elpe=0  npss=0
  avscc=0  apsta=0  wctemp=343  cctemp=373  mtfa=0  hmpre=0  hmmin=0  tnvmcap=0  unvmcap=0  rpmbs=0  edstt=0  dsto=0
  fwug=0  kas=0  hctma=0  mntmt=0  mxtmt=0  sanicap=0  hmminds=0  hmmaxd=0  nsetidmax=0  endgidmax=0  anatt=0  anacap=0
  anagrpmax=0  nanagrpid=0  pels=0  domainid=0  megcap=0  sqes=0x66  cqes=0x44  maxcmd=0  nn=256  oncs=0x15d  fuses=0
  fna=0  vwc=0x7  awun=0  awupf=0  icsvscc=0  nwpc=0  acwu=0  ocfs=0x3  sgls=0x1  mnan=0  maxdna=0  maxcna=0
  subnqn=nqn.2019-08.org.qemu:12341  ioccsz=0  iorcsz=0  icdoff=0  fcatt=0  msdbd=0  ofcs=0
  ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'  rwt='0 rwl:0 idle_power:- active_power:-'  active_power_workload=-
00:20:43.918 11:34:49 nvme_scc -- nvme/functions.sh@53-54 -- # local -n _ctrl_ns=nvme0_ns; for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:43.918 11:34:49 nvme_scc -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]; ns_dev=ng0n1; nvme_get ng0n1 id-ns /dev/ng0n1
00:20:43.918 11:34:49 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=ng0n1 reg val; shift; local -gA 'ng0n1=()'
00:20:43.918 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:20:43.918 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns registers parsed into ng0n1[] (condensed):
  nsze=0x140000  ncap=0x140000  nuse=0x140000
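
The @54 loop above is what pairs the controller with both of its namespace node flavors. With bash extglob, for ctrl=/sys/class/nvme/nvme0 the pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches sysfs entries beginning "ng0" (character nodes) or "nvme0n" (block nodes), which is why ng0n1 is scanned first and nvme0n1 second. A standalone illustration of the same pattern (the real loop runs nvme_get on each hit):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0                # as in the trace
    # "ng${ctrl##*nvme}" -> "ng0"; "${ctrl##*/}n" -> "nvme0n"
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                      # e.g. ng0n1, nvme0n1
        echo "would run: nvme id-ns /dev/$ns_dev"
    done
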
00:20:43.918 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # ng0n1[] continued (condensed):
  nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0
  mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000  eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 '  lbaf1='ms:8 lbads:9 rp:0 '  lbaf2='ms:16 lbads:9 rp:0 '  lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 '
00:20:43.919 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:20:43.920 11:34:49 nvme_scc -- nvme/functions.sh@54-57 -- # next ns: [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:20:43.920 11:34:49 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=nvme0n1 reg val; shift; local -gA 'nvme0n1=()'
00:20:43.920 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:20:43.920 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns registers parsed into nvme0n1[] (condensed):
  nsze=0x140000  ncap=0x140000
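
Each lbafN string above describes one LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, and rp a relative-performance hint; "(in use)" marks the format selected via flbas (0x4 here, so lbaf4: 4096-byte blocks, no metadata). A small decode of that descriptor string, for illustration only (not part of functions.sh):

    # Decode an lbaf descriptor as printed above (illustrative sketch).
    desc='ms:0 lbads:12 rp:0 (in use)'   # lbaf4 from the trace
    [[ $desc =~ ms:([0-9]+)\ lbads:([0-9]+)\ rp:([0-9]+) ]] || exit 1
    ms=${BASH_REMATCH[1]} lbads=${BASH_REMATCH[2]} rp=${BASH_REMATCH[3]}
    echo "block size $((1 << lbads)) B, metadata $ms B, rp $rp"
    # -> block size 4096 B, metadata 0 B, rp 0
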
00:20:43.920 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # nvme0n1[] continued (condensed; values match ng0n1, since both nodes expose nsid 1 of the same namespace):
  nuse=0x140000  nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0
  mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000  eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 '  lbaf1='ms:8 lbads:9 rp:0 '  lbaf2='ms:16 lbads:9 rp:0 '  lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 '
00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls["$ctrl_dev"]=nvme0; nvmes["$ctrl_dev"]=nvme0_ns; bdfs["$ctrl_dev"]=0000:00:11.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@47-50 -- # for ctrl in /sys/class/nvme/nvme*; [[ -e /sys/class/nvme/nvme1 ]]; pci=0000:00:10.0; pci_can_use 0000:00:10.0
00:20:43.921 11:34:49 nvme_scc -- scripts/common.sh@18-27 -- # local i; [[ =~ 0000:00:10.0 ]]; [[ -z '' ]]; return 0
00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.921 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 
11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:43.922 
11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:43.922 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:43.923 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
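The id-ctrl pass being traced here was reached through the controller scan at functions.sh@47-52, with scripts/common.sh's pci_can_use gating each device (visible earlier in the trace, where nvme1 at 0000:00:10.0 was accepted). A rough reconstruction of that scan follows; the readlink-based PCI derivation and the pci_can_use stub are assumptions, everything else mirrors the traced statements:

pci_can_use() { return 0; }  # stand-in: the real scripts/common.sh@18-27 checks PCI allow/block lists
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue                       # functions.sh@48
  pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed derivation of e.g. 0000:00:10.0
  pci_can_use "$pci" || continue                   # functions.sh@50: skip filtered devices
  ctrl_dev=${ctrl##*/}                             # functions.sh@51: ctrl_dev=nvme1
  nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # functions.sh@52: parse identify data
done

nvme_get itself is sketched after the ng1n1 pass below.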
00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:43.924 11:34:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:43.924 11:34:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
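Every register line in this trace is produced by one read/eval loop inside nvme_get (functions.sh@17-23): nvme-cli's "field : value" text output is split at the first colon and stored into a per-device global associative array. A minimal sketch, assuming that output format; the field-name cleanup is guessed, the rest mirrors the traced statements:

nvme_get() {
  local ref=$1 reg val                         # functions.sh@17: ref=ng1n1 for this pass
  shift                                        # remaining args, e.g.: id-ns /dev/ng1n1
  local -gA "$ref=()"                          # functions.sh@20: one global array per device
  while IFS=: read -r reg val; do              # functions.sh@21
    reg=${reg//[[:space:]]/}                   # assumed cleanup of padded field names
    [[ -n $val ]] || continue                  # functions.sh@22: skip headers and blank lines
    eval "${ref}[${reg}]=\"${val# }\""         # functions.sh@23 (eval is unsafe for untrusted input)
  done < <(/usr/local/src/nvme-cli/nvme "$@")  # functions.sh@16
}

After nvme_get nvme1 id-ctrl /dev/nvme1, values such as ${nvme1[sn]} and ${nvme1[mdts]} are available to later test helpers. Splitting at the first colon is also why an entry like nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' appeared above: the multi-colon power-state line yields reg=rwt and leaves the rest as the value.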
00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.924 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:20:43.925 11:34:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 
11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:43.925 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
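# --- Editor's sketch (not captured output): the nvme_get loop traced above,
# --- reconstructed from the xtrace alone. It runs nvme-cli's identify, splits
# --- each "field : value" line on the first colon, and evals the pair into a
# --- global associative array named by $ref (nvme1n1, ng2n1, ...). The real
# --- nvme/functions.sh may differ in detail; this only mirrors the trace.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue      # skip blanks and headers
        reg=${reg//[[:space:]]/}                  # "nsze   " -> "nsze"
        eval "${ref}[${reg}]=\"\${val# }\""       # keep the value verbatim
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# Usage mirroring the trace: nvme_get nvme1n1 id-ns /dev/nvme1n1
# afterwards "${nvme1n1[nsze]}" is 0x17a17a, "${nvme1n1[flbas]}" is 0x7, etc.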
00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.190 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:44.191 
11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:44.191 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:44.192 11:34:49 
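# --- Editor's note: a minimal decode of the LBA-format fields just captured.
# --- The low nibble of flbas selects the active lbafN entry; lbads is log2 of
# --- the data block size and ms the metadata bytes per block. Sample values
# --- copied from the trace (nvme1n1: flbas=0x7, lbaf7 marked "in use"):
declare -A nvme1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
fmt=$(( nvme1n1[flbas] & 0xf ))                      # -> 7
read -r _ ms _ lbads _ <<<"${nvme1n1[lbaf7]//:/ }"   # ms=64, lbads=12
echo "lbaf${fmt}: $((1 << lbads))-byte blocks, ${ms}B metadata per block"
# -> lbaf7: 4096-byte blocks, 64B metadata per block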
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:44.192 11:34:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:20:44.192 11:34:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:20:44.192 11:34:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:44.192 11:34:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:20:44.192 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:20:44.193 11:34:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:20:44.193 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:20:44.194 11:34:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.194 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:20:44.195 
11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:44.195 
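# --- Editor's sketch of the namespace walk the trace enters next: each node
# --- under the controller's sysfs dir, generic (ng2n1) or block (nvme2n1),
# --- is identified with id-ns and indexed by namespace number. Reconstructed
# --- from the glob and nameref seen above; it relies on the nvme_get sketch
# --- earlier and on extglob for the @(...) pattern.
shopt -s extglob nullglob
walk_ctrl_namespaces() {
    local ctrl=$1 ns ns_dev                 # e.g. /sys/class/nvme/nvme2
    local -n _ctrl_ns=${ctrl##*/}_ns        # -> nvme2_ns, keyed by ns index
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                    # ng2n1, nvme2n1, ...
        [[ -e /dev/$ns_dev ]] || continue
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # ng2n1 -> _ctrl_ns[1]=ng2n1
    done
}
declare -A nvme2_ns=()
walk_ctrl_namespaces /sys/class/nvme/nvme2  # fills nvme2_ns, ng2n1, nvme2n1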
11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:44.195 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into ng2n1 (continued):
00:20:44.195 11:34:49 nvme_scc --     dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:20:44.196 11:34:49 nvme_scc --     nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:20:44.197 11:34:49 nvme_scc --     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:20:44.197 11:34:49 nvme_scc --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:44.197 11:34:49 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:20:44.197 11:34:49 nvme_scc --     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:20:44.197 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into ng2n2:
00:20:44.197 11:34:49 nvme_scc --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
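Every field above is captured by the same helper visible in this trace: nvme_get declares a global associative array named after the namespace (the `local -gA 'ng2n2=()'` step), runs nvme-cli's id-ns against the node, then reads the output with `IFS=:` so each `reg : value` line splits into a key and a value that are eval'ed into the array. A minimal sketch of that pattern, assuming id-ns output of the form `nsze : 0x100000`; the helper name and the exact trimming here are illustrative, not the functions.sh source:

    # Cache `nvme id-ns` output into a global associative array named $1.
    nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"                     # e.g. declares global array ng2n1
      while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines
        reg=${reg//[[:space:]]/}              # 'lbaf  0 ' -> 'lbaf0'
        val=${val# }                          # drop the space after the colon
        eval "$ref[$reg]=\$val"               # e.g. ng2n1[nsze]=0x100000
      done < <(nvme id-ns "$dev")
    }

After `nvme_get_sketch ng2n1 /dev/ng2n1`, `${ng2n1[nsze]}` yields 0x100000, which is exactly the kind of lookup the assignments in this trace are building toward.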
00:20:44.198 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into ng2n2 (continued):
00:20:44.198 11:34:49 nvme_scc --     mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:20:44.198 11:34:49 nvme_scc --     nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:20:44.198 11:34:49 nvme_scc --     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:20:44.198 11:34:49 nvme_scc --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:44.199 11:34:49 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:20:44.199 11:34:49 nvme_scc --     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into ng2n3:
00:20:44.199 11:34:49 nvme_scc --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
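The recurring `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` line is the outer loop that finds each of these nodes. With `ctrl=/sys/class/nvme/nvme2`, `${ctrl##*nvme}` reduces to `2` and `${ctrl##*/}` to `nvme2`, so the extglob alternation matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...). A standalone sketch of that enumeration, assuming the same sysfs layout (the echo is illustrative):

    shopt -s extglob                    # needed for the @(...|...) alternation
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue          # an unmatched glob stays literal
      ns_dev=${ns##*/}                  # ng2n3, nvme2n1, ...
      echo "nsid ${ns_dev##*n} -> $ns_dev"
    done

Since glob expansion is sorted, the ng2nX entries are visited before the nvme2nX ones; `_ctrl_ns[${ns##*n}]` is therefore first set to the generic node and later overwritten by the block node with the same index, which matches the order visible in this trace.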
00:20:44.199 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into ng2n3 (continued):
00:20:44.199 11:34:49 nvme_scc --     nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:44.199 11:34:49 nvme_scc --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:20:44.200 11:34:49 nvme_scc --     npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:20:44.200 11:34:49 nvme_scc --     nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:20:44.200 11:34:49 nvme_scc --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:44.200 11:34:49 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:20:44.200 11:34:49 nvme_scc --     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into nvme2n1:
00:20:44.200 11:34:49 nvme_scc --     nsze=0x100000 ncap=0x100000 nuse=0x100000
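The loop has now moved from the generic character nodes (ng2n1 through ng2n3) to the first block node, nvme2n1. Both node types front the same namespaces on controller nvme2, which is why the identify fields being cached repeat verbatim from array to array. A quick spot check of that equivalence on a machine laid out like this one (device names taken from the trace; purely illustrative):

    # The generic and block nodes should answer Identify Namespace identically.
    diff <(nvme id-ns /dev/ng2n1) <(nvme id-ns /dev/nvme2n1) \
        && echo 'ng2n1 and nvme2n1 describe the same namespace'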
00:20:44.200 11:34:49 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields cached into nvme2n1 (continued):
00:20:44.201 11:34:49 nvme_scc --     nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:44.201 11:34:49 nvme_scc --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0
00:20:44.201 11:34:49 nvme_scc --     npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:20:44.202 11:34:49 nvme_scc --     nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:44.202 11:34:49 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:20:44.202 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 LBA formats: lbaf3="ms:64 lbads:9 rp:0" lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:20:44.202 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:20:44.202 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] -> ns_dev=nvme2n2
00:20:44.202 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2): nsze=0x100000
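The repeated IFS=:, read -r reg val, and eval steps traced above are all one small loop: nvme_get runs nvme-cli, splits every "reg : val" line of its output on the first colon, and evals the pair into a globally scoped associative array named after the device. A minimal sketch of that loop, reconstructed from the trace rather than copied verbatim from nvme/functions.sh; it assumes bash 4.2+ for local -gA:

nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"                      # e.g. declares the global array nvme2n2
  while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
    reg=${reg//[[:space:]]/}               # "nsze : 0x100000" -> key nsze
    eval "${ref}[$reg]=\"${val# }\""       # -> nvme2n2[nsze]=0x100000
  done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# usage mirroring the trace: nvme_get nvme2n2 id-ns /dev/nvme2n2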
00:20:44.202 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 id-ns: ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:44.203 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:20:44.203 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 id-ns: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:44.203 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 LBA formats: lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0" lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:20:44.204 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:20:44.204 11:34:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] -> ns_dev=nvme2n3
00:20:44.204 11:34:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
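Every namespace above reports flbas=0x4 with lbaf4 flagged "(in use)" at lbads:12, i.e. 4 KiB data blocks with no metadata. The active block size falls out of the captured fields directly: FLBAS bits 3:0 index the LBA format, and lbads is log2 of the data block size. A sketch assuming the nvme2n2 array populated above and bash 4.3+ for namerefs:

declare -n ns=nvme2n2                      # array built by nvme_get above
lbaf_idx=$(( ns[flbas] & 0xf ))            # 0x4 -> format index 4
lbaf=${ns[lbaf$lbaf_idx]}                  # "ms:0 lbads:12 rp:0 (in use)"
lbads=${lbaf#*lbads:}                      # "12 rp:0 (in use)"
lbads=${lbads%% *}                         # 12
echo "in-use block size: $(( 1 << lbads )) bytes"   # 4096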
00:20:44.204 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:20:44.204 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:20:44.466 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 id-ns: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:20:44.466 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3 LBA formats: lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0" lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls[nvme2]=nvme2
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes[nvme2]=nvme2_ns
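After the last namespace is filed under its index in _ctrl_ns, the controller-level maps are filled in (ctrls and nvmes above; bdfs and ordered_ctrls just below). A simplified sketch of that sysfs bookkeeping; the script's extglob pattern also matches the ngXnY char devices, and deriving the BDF from the sysfs device symlink is an assumption here, not something shown in the trace:

shopt -s extglob nullglob
declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme+([0-9]); do
  declare -A _ctrl_ns=()
  for ns in "$ctrl/${ctrl##*/}n"+([0-9]); do
    _ctrl_ns[${ns##*n}]=${ns##*/}          # .../nvme2n3 -> key 3, value nvme2n3
  done
  ctrl_dev=${ctrl##*/}                     # nvme2
  ctrls["$ctrl_dev"]=$ctrl_dev
  bdfs["$ctrl_dev"]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
done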
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs[nvme2]=0000:00:12.0
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[2]=nvme2
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] -> pci=0000:00:13.0
00:20:44.467 11:34:49 nvme_scc -- scripts/common.sh@18-27 -- # pci_can_use 0000:00:13.0: not blocked, allow-list empty -> return 0
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@52 -- # ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3 (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn="12343" mn="QEMU NVMe Ctrl" fr="8.0.0"
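Before a controller is claimed, its PCI address is gated through pci_can_use, and the trace above shows both empty-list fast paths: the block-list match never fires and "[[ -z '' ]]" leads straight to return 0. A reconstruction of that gate; PCI_BLOCKED and PCI_ALLOWED are the environment variables SPDK's setup scripts use, but the body here is inferred from the trace, not copied from scripts/common.sh:

pci_can_use() {
  local pci=$1 i
  [[ " $PCI_BLOCKED " == *" $pci "* ]] && return 1   # explicitly blocked
  [[ -z $PCI_ALLOWED ]] && return 0                  # no allow-list: take everything
  for i in $PCI_ALLOWED; do
    [[ $i == "$pci" ]] && return 0                   # must be listed to be used
  done
  return 1
}
pci_can_use 0000:00:13.0 && echo "claiming nvme3"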
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl: rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1
00:20:44.467 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
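Note that mdts=7 above is not a byte count: NVMe reports the maximum data transfer size as a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN). Assuming the common 4 KiB minimum page, which is not part of the id-ctrl dump, one transfer is capped at 512 KiB:

declare -n c=nvme3                 # array captured above
mps_min=4096                       # assumed CAP.MPSMIN page size
echo "max transfer: $(( (1 << c[mdts]) * mps_min / 1024 )) KiB"   # 512 KiB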
00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3 id-ctrl: fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0
11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:44.468 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:44.469 
11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:44.469 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:44.470 11:34:50 nvme_scc -- 
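The per-register trace above repeats one pattern: each line of `nvme id-ctrl` output is split on ':' into a register/value pair and eval'd into an associative array named after the controller (nvme3[oacs]="0x12a", and so on). A minimal sketch of that pattern, assuming nvme-cli is installed; parse_id_ctrl is an illustrative name, not SPDK's exact helper:

parse_id_ctrl() {                        # sketch: parse_id_ctrl nvme3 /dev/nvme3
    local ref=$1 dev=$2 reg val
    declare -gA "$ref"                   # global associative array, e.g. nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # register names arrive space-padded
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme3[oacs]="0x12a"
    done < <(nvme id-ctrl "$dev")
}

The eval mirrors what the trace shows; a value containing double quotes would break it, which is acceptable for id-ctrl's field:value output but not a general-purpose parser.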
00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:20:44.470 11:34:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@192-199 -- # get_ctrls_with_feature scc: for each ctrl, ctrl_has_scc reads oncs through a nameref (local -n _ctrl=nvmeX) and tests (( oncs & 1 << 8 ))
00:20:44.470 11:34:50 nvme_scc -- # nvme1: oncs=0x15d, bit 8 set -> echo nvme1
00:20:44.470 11:34:50 nvme_scc -- # nvme0: oncs=0x15d, bit 8 set -> echo nvme0
00:20:44.470 11:34:50 nvme_scc -- # nvme3: oncs=0x15d, bit 8 set -> echo nvme3
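Every ctrl_has_scc pass above reduces to one arithmetic test: bit 8 of the ONCS (Optional NVM Command Support) field advertises the Copy command, and 0x15d has that bit set. A hedged sketch of the check; ctrl_supports_scc is an illustrative name:

ctrl_supports_scc() {
    local oncs=$1                 # e.g. 0x15d as parsed from id-ctrl above
    (( oncs & 1 << 8 ))           # bit 8 set -> Simple Copy supported
}
ctrl_supports_scc 0x15d && echo supported   # 0x15d = 0b101011101 with bit 8 set, so this prints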
00:20:44.470 11:34:50 nvme_scc -- # nvme2: oncs=0x15d, bit 8 set -> echo nvme2
00:20:44.470 11:34:50 nvme_scc -- nvme/functions.sh@207-209 -- # (( 4 > 0 )); echo nvme1; return 0
00:20:44.470 11:34:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:20:44.470 11:34:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:20:44.470 11:34:50 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:20:45.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:45.606 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:20:45.606 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:20:45.606 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:20:45.606 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:20:45.606 11:34:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:20:45.606 ************************************
00:20:45.606 START TEST nvme_simple_copy
00:20:45.606 ************************************
00:20:46.175 Initializing NVMe Controllers
00:20:46.175 Attaching to 0000:00:10.0
00:20:46.175 Controller supports SCC. Attached to 0000:00:10.0
00:20:46.175 Namespace ID: 1 size: 6GB
00:20:46.175 Initialization complete.
00:20:46.175 Controller QEMU NVMe Ctrl (12340 )
00:20:46.175 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:20:46.175 Namespace Block Size:4096
00:20:46.175 Writing LBAs 0 to 63 with Random Data
00:20:46.175 Copied LBAs from 0 - 63 to the Destination LBA 256
00:20:46.175 LBAs matching Written Data: 64
00:20:46.175 real 0m0.336s  user 0m0.128s  sys 0m0.107s
00:20:46.175 ************************************
00:20:46.175 END TEST nvme_simple_copy
00:20:46.175 ************************************
00:20:46.175 real 0m8.347s  user 0m1.477s  sys 0m1.738s
00:20:46.175 ************************************
00:20:46.175 END TEST nvme_scc
00:20:46.175 ************************************
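The START/END banners and the real/user/sys triples come from a run_test-style wrapper around each suite. A rough sketch of the shape such a wrapper takes, under the assumption that it is essentially a banner plus bash's time keyword (the name run_test_sketch and the body are illustrative, not SPDK's autotest_common.sh):

run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                    # emits the real/user/sys summary seen above
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
run_test_sketch nvme_fdp test/nvme/nvme_fdp.sh   # usage mirroring the dispatch below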
00:20:46.175 11:34:51 -- spdk/autotest.sh@219/@222/@225 -- # disabled suites skipped ([[ 0 -eq 1 ]], [[ 0 -eq 1 ]], [[ '' -eq 1 ]])
00:20:46.175 11:34:51 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:20:46.175 11:34:51 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:20:46.175 ************************************
00:20:46.175 START TEST nvme_fdp
00:20:46.175 ************************************
00:20:46.175 * Looking for test storage...
00:20:46.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:46.175 11:34:51 nvme_fdp -- common/autotest_common.sh@1692-1693 -- # lcov --version | awk '{print $NF}' -> 1.15
00:20:46.176 11:34:51 nvme_fdp -- scripts/common.sh@333-368 -- # cmp_versions 1.15 '<' 2: both strings split with IFS=.-: into arrays and compared field by field; 1 < 2 on the first field, return 0
00:20:46.176 11:34:51 nvme_fdp -- common/autotest_common.sh@1694-1707 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'; LCOV_OPTS and LCOV exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1
00:20:46.176 11:34:51 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:20:46.444 11:34:51 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:46.444 11:34:51 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:46.444 11:34:51 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:20:46.444 11:34:51 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] (absent)
00:20:46.444 11:34:51 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] (present)
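The cmp_versions trace above is a plain field-by-field comparison after splitting on '.', '-', and ':' (the IFS=.-: reads). A compact sketch of the same idea; version_lt is an illustrative name and only handles numeric fields:

version_lt() {                    # version_lt 1.15 2  -> true, since 1 < 2
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal is not strictly less
}
version_lt 1.15 2 && echo "enable lcov coverage flags"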
00:20:46.444 11:34:51 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:46.444 11:34:51 nvme_fdp -- paths/export.sh@2-@4 -- # /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin are prepended to PATH again on each sourcing, so the exported PATH carries several duplicated copies of those entries ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:46.444 11:34:51 nvme_fdp -- paths/export.sh@5-@6 -- # export PATH; echo "$PATH"
00:20:46.444 11:34:51 nvme_fdp -- nvme/functions.sh@10-14 -- # ctrls=(); declare -A ctrls; nvmes=(); declare -A nvmes; bdfs=(); declare -A bdfs; ordered_ctrls=(); declare -a ordered_ctrls; nvme_name=
00:20:46.444 11:34:51 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:46.444 11:34:51 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:20:46.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:47.024 Waiting for block devices as requested
00:20:47.024 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:47.024 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:47.024 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:20:47.024 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:20:52.310 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:20:52.310 11:34:57 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
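The ctrls/nvmes/bdfs associative arrays declared above, plus bash namerefs, are what let the scan below address a controller's register array by a name computed at runtime. A small self-contained sketch; get_reg and the literal values are illustrative stand-ins for the parsed data:

declare -A nvme0=([oncs]="0x15d" [mdts]="7")     # stand-in for the array the scan fills
declare -A ctrls=([nvme0]=nvme0) bdfs=([nvme0]=0000:00:11.0)

get_reg() {                      # get_reg nvme0 oncs
    local -n _ctrl=$1            # nameref: _ctrl aliases the array named by $1
    echo "${_ctrl[$2]}"
}
get_reg nvme0 oncs               # prints 0x15d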
00:20:52.310 11:34:57 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:20:52.310 11:34:57 nvme_fdp -- nvme/functions.sh@47-48 -- # for ctrl in /sys/class/nvme/nvme*; [[ -e /sys/class/nvme/nvme0 ]]
00:20:52.310 11:34:57 nvme_fdp -- nvme/functions.sh@49-50 -- # pci=0000:00:11.0; pci_can_use 0000:00:11.0 -> return 0
00:20:52.310 11:34:57 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme0; nvme_get nvme0 id-ctrl /dev/nvme0
00:20:52.310 11:34:57 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme0 reg val; shift; local -gA 'nvme0=()'
00:20:52.310 11:34:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:20:52.310-00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ctrl registers read into nvme0[]:
00:20:52.313 nvme0: vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:20:52.313 11:34:57 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:20:52.313 11:34:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.313 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
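The long run of IFS=: / read / eval records above is nvme/functions.sh's nvme_get helper populating one global associative array per device: it pipes /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) through a read loop that splits each output line at the first ':' and evals the reg/val pair into the array named by its first argument. A minimal sketch of that loop, reconstructed from the trace (the whitespace trimming below is an assumption; the real helper also copes with multi-line fields such as ps0 and rwt):

    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA ng0n1=()  (trace line @20)
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # lines without "reg : val" are skipped (@22)
            reg=${reg//[[:space:]]/}         # strip the padding around the field name
            val=${val# }                     # drop the leading space after ':'
            eval "${ref}[${reg}]=\"\$val\""  # e.g. ng0n1[nsze]="0x140000"  (@23)
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")   # @16
    }

Invoked as nvme_get_sketch ng0n1 id-ns /dev/ng0n1, it leaves ${ng0n1[nsze]} holding 0x140000, matching the values recorded above and below.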
00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:20:52.314 11:34:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.314 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
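Once the ng0n1 fields are captured, the trace turns to the namespace scan itself (functions.sh lines @54-@58): an extglob walk over /sys/class/nvme/nvme0 that matches both the generic character node (ng0n1) and the block node (nvme0n1), indexing each into nvme0_ns through the _ctrl_ns nameref. A sketch of that loop under the same naming, with shopt -s extglob assumed to be in effect as the pattern requires:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    declare -A nvme0_ns=()
    declare -n _ctrl_ns=nvme0_ns
    # expands to @(ng0|nvme0n)*: matches ng0n1 and nvme0n1 under the controller dir
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                 # ng0n1, then nvme0n1
        _ctrl_ns[${ns_dev##*n}]=$ns_dev  # keyed by namespace id: both map to key 1
    done

Because both device nodes share namespace id 1, the @58 records that follow first store ng0n1 and then, after the nvme0n1 fields are re-read, overwrite the same key with nvme0n1, so nvme0_ns ends up pointing at the block device.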
00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:52.315 11:34:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.315 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.316 11:34:57 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:52.316 11:34:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:20:52.316 11:34:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:52.316 11:34:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:52.316 11:34:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.316 11:34:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:52.317 11:34:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:52.317 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
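With enumeration done for nvme0 (@60-@63 above bind nvme0 into ctrls, nvmes, bdfs and ordered_ctrls), the same id-ctrl walk repeats for nvme1 at 0000:00:10.0; the values landing here (vid 0x1b36, sn 12340, mdts 7, ctratt 0x8000, ...) are what the later FDP checks consume. A hedged illustration of that read-back side, not taken from functions.sh itself (the helper name is invented, and bit 19 as the Flexible Data Placement bit of CTRATT should be double-checked against the NVMe FDP technical proposal):

    # Succeeds only if the controller's cached CTRATT advertises FDP.
    supports_fdp() {
        local -n _ctrl=$1                  # nameref into e.g. the nvme1 array above
        local ctratt=${_ctrl[ctratt]:-0}
        (( ctratt & 1 << 19 ))             # assumed FDP bit; 0x8000 (nvme1) fails this test
    }
    supports_fdp nvme1 || echo "nvme1 does not advertise FDP"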
00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
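Two of the values stored in the stretch above are easy to misread: per the NVMe spec, wctemp and cctemp are integer Kelvin, so the 343/373 recorded for nvme1 (typical defaults for a QEMU emulated controller) are the usual thresholds once converted:

    # Kelvin -> Celsius for the thresholds the trace captured
    # (nvme1[wctemp]=343, nvme1[cctemp]=373):
    echo $(( ${nvme1[wctemp]} - 273 ))   # -> 70  (warning composite temperature, C)
    echo $(( ${nvme1[cctemp]} - 273 ))   # -> 100 (critical composite temperature, C)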
00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.318 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:20:52.319 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:20:52.582 11:34:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
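At functions.sh@53-58 the trace moves from the controller to its namespaces: an extglob pattern matches both the generic character node (ng1n1) and the block namespace (nvme1n1) under the controller's sysfs directory, and each gets its own id-ns snapshot through the same nvme_get helper. A sketch of that loop as the traced commands imply it:

    # Assumed reconstruction of the namespace walk (functions.sh@53-58):
    shopt -s extglob nullglob
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns                   # @53 (local -n inside the real function)
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng1* or nvme1n*
        [[ -e $ns ]] || continue                   # @55
        ns_dev=${ns##*/}                           # @56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57
        _ctrl_ns[${ns_dev##*n}]=$ns_dev            # @58: keyed by NSID; nvme1n1 overwrites ng1n1
    done

Both sysfs entries describe NSID 1, which is why the id-ns dump for nvme1n1 that follows repeats the ng1n1 values field for field.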
00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:20:52.582 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:20:52.583 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
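Everything captured for ng1n1 is now an ordinary bash lookup, and a couple of the raw values are worth decoding: nsze/ncap/nuse=0x17a17a is the namespace size in logical blocks, and mssrl/mcl/msrc (128/128/127 above) are the Copy-command limits, with msrc being a 0's-based count:

    # Reading back values the trace just stored (illustrative):
    echo "${ng1n1[nsze]}"                  # 0x17a17a
    printf '%d LBAs\n' "${ng1n1[nsze]}"    # 1548666 LBAs
    # Copy limits: max single source range 128 LBAs, max total copy 128 LBAs,
    # and msrc=127 is 0's based, i.e. up to 128 source ranges per command.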
00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.583 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.584 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.584 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:52.584 11:34:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.584 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:52.585 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
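The nvme1n1 pass replays the same content, and it pins down which LBA format is live: bits 3:0 of flbas (0x7 here) index the lbaf table, and nvme-cli tags that row "(in use)". Decoding it from the arrays already populated for ng1n1:

    # flbas bits 3:0 select the active LBA format (illustrative, values from the trace):
    fmt=$(( ${ng1n1[flbas]} & 0xf ))     # 0x7 & 0xf -> 7
    echo "${ng1n1[lbaf$fmt]}"            # ms:64 lbads:12 rp:0 (in use)
    echo $(( 1 << 12 ))                  # lbads:12 -> 4096-byte blocks (+64B metadata)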
00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.585 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:52.586 11:34:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:20:52.586 11:34:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:20:52.586 11:34:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:52.586 11:34:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
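The trace above is the nvme_get pattern at work: /usr/local/src/nvme-cli/nvme id-ctrl output is read line by line with IFS set to ':', and each non-empty register/value pair is stored into a global associative array via eval. A minimal standalone sketch of that pattern, assuming nvme-cli on PATH and a readable controller node; parse_id_ctrl is an illustrative name, not a helper from nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above: split `nvme id-ctrl`
    # key:value lines into a bash associative array. Assumes nvme-cli is
    # installed; parse_id_ctrl is a hypothetical name for illustration.
    shopt -s extglob
    declare -A ctrl

    parse_id_ctrl() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg%%+( )}; val=${val##+( )}    # trim padding around ':'
            [[ -n $reg && -n $val ]] || continue  # skip blank/header lines
            ctrl[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }

    parse_id_ctrl /dev/nvme2
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]} mdts=${ctrl[mdts]}"

The guard on empty values mirrors the [[ -n '' ]] checks visible in the trace: header lines with no value after the colon are simply skipped.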
00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.586 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:20:52.587 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
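Several of the values captured in this stretch are packed fields from the Identify Controller structure rather than plain numbers: ver=0x10400 encodes the major/minor/tertiary version bytes, and oacs=0x12a is the Optional Admin Command Support bitmask. A worked decode, based on my reading of the NVMe base specification field layouts (values copied from the trace, not re-queried):

    #!/usr/bin/env bash
    # Decode two packed Identify Controller fields seen in the trace.
    ver=0x10400    # VER: MJR<<16 | MNR<<8 | TER -> 1.4.0
    printf 'NVMe version: %d.%d.%d\n' \
        $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))

    oacs=0x12a     # set bits: 1, 3, 5, 8
    (( oacs & 1 << 1 )) && echo "Format NVM supported"
    (( oacs & 1 << 3 )) && echo "Namespace Management supported"
    (( oacs & 1 << 5 )) && echo "Directives supported"
    (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config supported"

Doorbell Buffer Config being advertised is consistent with an emulated QEMU controller, which is what the mn/subnqn values identify here.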
00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.587 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:20:52.588 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.588 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:20:52.589 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 
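The id-ns fields being collected for ng2n1 are enough to work out the namespace capacity: nsze counts logical blocks, and the low nibble of flbas (0x4) selects LBA format 4, whose descriptor further down in this dump reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A quick worked check with those values:

    #!/usr/bin/env bash
    # Namespace capacity from the ng2n1 id-ns values in this trace.
    nsze=0x100000   # size in logical blocks
    flbas=0x4       # bits 3:0 -> active LBA format index
    lbads=12        # from lbaf4 "ms:0 lbads:12 rp:0 (in use)"

    bytes=$(( nsze * (1 << lbads) ))
    echo "LBA format index: $(( flbas & 0xf ))"
    echo "capacity: $bytes bytes ($(( bytes >> 30 )) GiB)"   # 4 GiB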
11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:20:52.590 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:20:52.591 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000
00:20:52.591 11:34:58 nvme_fdp -- [ng2n2 id-ns fields, per-register eval/read trace condensed: ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000]
00:20:52.593 11:34:58 nvme_fdp -- [ng2n2 LBA formats: lbaf0 'ms:0 lbads:9 rp:0', lbaf1 'ms:8 lbads:9 rp:0', lbaf2 'ms:16 lbads:9 rp:0', lbaf3 'ms:64 lbads:9 rp:0', lbaf4 'ms:0 lbads:12 rp:0 (in use)', lbaf5 'ms:8 lbads:12 rp:0', lbaf6 'ms:16 lbads:12 rp:0', lbaf7 'ms:64 lbads:12 rp:0']
00:20:52.593 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:20:52.593 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:52.593 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:20:52.593 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:20:52.593 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:20:52.593 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
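The nvme_get helper traced here shifts off the target array name, declares it as a global associative array, then splits every "field : value" line of the `/usr/local/src/nvme-cli/nvme id-ns` output on ':' and evals each pair into the array. A minimal sketch of that pattern, reconstructed from the trace rather than quoted from nvme/functions.sh (the whitespace trimming and the process substitution are assumptions):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in this trace: load each
    # "field : value" line of `nvme id-ns` into a global associative
    # array named by $1. Not the verbatim nvme/functions.sh source.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declares ng2n3=()
        while IFS=: read -r reg val; do
            reg=${reg// /}               # "lbaf  4 " -> "lbaf4"
            val=${val# }                 # drop the leading space
            [[ -n $val ]] && eval "${ref}[$reg]=\"\$val\""
        done < <(nvme "$@")              # e.g. nvme id-ns /dev/ng2n3
    }
    # Usage mirroring the trace: nvme_get_sketch ng2n3 id-ns /dev/ng2n3
    # leaves ${ng2n3[nsze]} holding 0x100000.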
00:20:52.593 11:34:58 nvme_fdp -- [ng2n3 id-ns capture, same per-register trace with identical values: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0; nmic, rescap, fpi, nawun..nows, nulbaf, anagrpid, nsattr, nvmsetid, endgid all 0; dlfeat=1 mssrl=128 mcl=128 msrc=127; nguid/eui64 all zero; lbaf0-7 as for ng2n2, lbaf4 'ms:0 lbads:12 rp:0 (in use)']
00:20:52.879 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:20:52.879 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:52.879 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:20:52.879 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:20:52.879 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:20:52.880 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
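The loop at nvme/functions.sh@54 is a bash extglob: with ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} expands to 2 and ${ctrl##*/} to nvme2, so the pattern matches both the generic character-device names (ng2n1, ng2n2, ...) and the block-device names (nvme2n1, ...), which is why every namespace is captured twice in this trace; _ctrl_ns[${ns##*n}] then keys the result by namespace number. A small self-contained illustration (the sysfs path is taken from the trace, the loop body is illustrative):

    #!/usr/bin/env bash
    # Demonstrates the namespace glob from nvme/functions.sh@54.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    echo "${ctrl##*nvme}"    # -> 2      (strip longest prefix '*nvme')
    echo "${ctrl##*/}"       # -> nvme2  (basename of $ctrl)
    # Expands to @(ng2|nvme2n)*: ng2n1, ng2n2, ... and nvme2n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*n}"     # namespace number used to key _ctrl_ns
    done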
00:20:52.880 11:34:58 nvme_fdp -- [nvme2n1 id-ns capture, same per-register trace with identical values: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0; nmic, rescap, fpi, nawun..nows, nulbaf, anagrpid, nsattr, nvmsetid, endgid all 0; dlfeat=1 mssrl=128 mcl=128 msrc=127; nguid/eui64 all zero; lbaf0-7 as above, lbaf4 'ms:0 lbads:12 rp:0 (in use)']
00:20:52.881 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:20:52.881 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:52.881 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:20:52.881 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:20:52.882 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:20:52.882 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
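The values repeating across every namespace decode to the same small test namespace: flbas=0x4 keeps LBA format 4 active, lbads:12 in lbaf4 means 2^12 = 4096-byte logical blocks with no per-block metadata (ms:0), and nsze=0x100000 is 1,048,576 such blocks, i.e. 4 GiB. A quick arithmetic check (variable names illustrative):

    #!/usr/bin/env bash
    # Back-of-envelope decode of the captured id-ns fields.
    flbas=0x4 nsze=0x100000 lbads=12
    echo $((flbas & 0xf))             # -> 4, active LBA format index
    echo $((1 << lbads))              # -> 4096-byte logical blocks
    echo $((nsze * (1 << lbads)))     # -> 4294967296 bytes (4 GiB)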
00:20:52.882 11:34:58 nvme_fdp -- [nvme2n2 id-ns capture, same per-register trace with identical values: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0; nmic, rescap, fpi, nawun..nows, nulbaf, anagrpid, nsattr, nvmsetid, endgid all 0; dlfeat=1 mssrl=128 mcl=128 msrc=127; nguid/eui64 all zero; lbaf0-7 as above, lbaf4 'ms:0 lbads:12 rp:0 (in use)']
00:20:52.883 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:20:52.883 11:34:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:20:52.884 11:34:58 nvme_fdp -- [nvme2n3 id-ns capture begins: nsze=0x100000 ncap=0x100000 nuse=0x100000]
00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:20:52.884 11:34:58
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:20:52.884 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.884 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:52.885 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:20:52.885 11:34:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:20:52.885 11:34:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:20:52.885 11:34:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:52.885 11:34:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:20:52.885 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
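The wall of trace above and below is one helper looping over `nvme id-ctrl`/`nvme id-ns` output: it splits each line on the first colon into a register name and value, skips empty values, and eval's the pair into a global associative array (nvme3, nvme2n2, nvme2n3, ...). A minimal self-contained sketch of that pattern — assuming a plain `nvme` binary on PATH, and using the hypothetical names parse_id_ctrl/ctrl_info rather than the exact upstream helper:

#!/usr/bin/env bash
# Sketch of the parse loop driving this trace: each "reg : val" line of
# nvme-cli output becomes one associative-array entry per register.
parse_id_ctrl() {
    local dev=$1 reg val
    declare -gA ctrl_info=()                  # global, like 'local -gA nvme3=()'
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # register names carry column padding
        val=${val#"${val%%[![:space:]]*}"}    # strip leading whitespace only
        [[ -n $reg && -n $val ]] && ctrl_info[$reg]=$val   # mirrors the [[ -n ... ]] guards
    done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme3
echo "ctratt=${ctrl_info[ctratt]:-unset}"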
00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 
11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:20:52.886 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:52.887 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
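Several of the values just captured are packed encodings rather than plain numbers. As a hedged decode (field layouts per the NVMe base specification, not something this script does explicitly): the low and high nibbles of sqes/cqes are log2 of the minimum and maximum queue-entry sizes, and oncs is a bitmask of optional commands, so the 0x15d here includes Dataset Management and Copy support — consistent with the nonzero mssrl/mcl/msrc fields parsed on the namespaces above.

# Hedged decode of a few packed id-ctrl fields from this controller;
# variable names are illustrative, values copied from the trace.
sqes=0x66 cqes=0x44 oncs=0x15d
printf 'SQ entry: min %d, max %d bytes\n' \
    $(( 1 << (sqes & 0xf) )) $(( 1 << ((sqes >> 4) & 0xf) ))   # 64 / 64
printf 'CQ entry: min %d, max %d bytes\n' \
    $(( 1 << (cqes & 0xf) )) $(( 1 << ((cqes >> 4) & 0xf) ))   # 16 / 16
(( oncs & (1 << 2) )) && echo 'Dataset Management supported'   # bit 2
(( oncs & (1 << 8) )) && echo 'Copy supported'                 # bit 8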
00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
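Once a controller's registers and namespaces are parsed, the script files it into a set of bookkeeping arrays — visible after the nvme2 namespaces above and again for nvme3 just below: ctrls and bdfs map a controller name to itself and its PCI address, nvmes records the name of the per-controller namespace map, and ordered_ctrls keeps controllers indexable by number. A small sketch of that registry pattern, with the illustrative helper name register_ctrl:

# Sketch of the per-controller bookkeeping seen in the trace.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
register_ctrl() {
    local ctrl_dev=$1 bdf=$2
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns              # e.g. nvme3 -> "nvme3_ns"
    bdfs[$ctrl_dev]=$bdf
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index 3 for nvme3
}
register_ctrl nvme3 0000:00:13.0
echo "${bdfs[nvme3]}"                            # -> 0000:00:13.0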
00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:52.888 11:34:58 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:52.888 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:20:52.889 11:34:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:20:53.155 11:34:58 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:20:53.155 11:34:58 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:20:53.155 11:34:58 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:20:53.155 11:34:58 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:20:53.155 11:34:58 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:20:53.156 11:34:58 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.980 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.980 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.980 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.980 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:54.237 11:34:59 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:54.237 11:34:59 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:54.237 11:34:59 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.237 11:34:59 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:54.237 ************************************ 00:20:54.237 START TEST nvme_flexible_data_placement 00:20:54.237 ************************************ 00:20:54.237 11:34:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:54.495 Initializing NVMe Controllers 00:20:54.495 Attaching to 0000:00:13.0 00:20:54.495 Controller supports FDP Attached to 0000:00:13.0 00:20:54.495 Namespace ID: 1 Endurance Group ID: 1 00:20:54.495 Initialization complete. 
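The controller selection traced above comes down to one capability test: a controller qualifies for the FDP suite when bit 19 (Flexible Data Placement) of its CTRATT value is set, which is why the controllers reporting ctratt=0x8000 are skipped and only nvme3 (ctratt=0x88010) is echoed. A minimal sketch of that check, written against the values visible in the trace rather than the full nvme/functions.sh machinery:

```bash
#!/usr/bin/env bash
# Sketch of the ctrl_has_fdp test from nvme/functions.sh: FDP support is
# advertised by CTRATT bit 19 (Flexible Data Placement).
ctrl_has_fdp() {
    local ctratt=$1
    (( ctratt & 1 << 19 ))
}

# CTRATT values reported by the four controllers in this run
for ctratt in 0x8000 0x8000 0x88010 0x8000; do
    if ctrl_has_fdp "$ctratt"; then
        echo "ctratt=$ctratt: FDP capable"
    else
        echo "ctratt=$ctratt: no FDP"
    fi
done
```

Only the 0x88010 controller passes, matching the single "echo nvme3" in the trace.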
00:20:54.495 00:20:54.495 ================================== 00:20:54.495 == FDP tests for Namespace: #01 == 00:20:54.495 ================================== 00:20:54.495 00:20:54.495 Get Feature: FDP: 00:20:54.495 ================= 00:20:54.495 Enabled: Yes 00:20:54.495 FDP Configuration Index: 0 00:20:54.495 00:20:54.495 FDP configurations log page 00:20:54.495 =========================== 00:20:54.495 Number of FDP configurations: 1 00:20:54.495 Version: 0 00:20:54.495 Size: 112 00:20:54.495 FDP Configuration Descriptor: 0 00:20:54.495 Descriptor Size: 96 00:20:54.495 Reclaim Group Identifier format: 2 00:20:54.495 FDP Volatile Write Cache: Not Present 00:20:54.495 FDP Configuration: Valid 00:20:54.495 Vendor Specific Size: 0 00:20:54.495 Number of Reclaim Groups: 2 00:20:54.495 Number of Reclaim Unit Handles: 8 00:20:54.495 Max Placement Identifiers: 128 00:20:54.495 Number of Namespaces Supported: 256 00:20:54.495 Reclaim Unit Nominal Size: 6000000 bytes 00:20:54.495 Estimated Reclaim Unit Time Limit: Not Reported 00:20:54.495 RUH Desc #000: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #001: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #002: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #003: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #004: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #005: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #006: RUH Type: Initially Isolated 00:20:54.495 RUH Desc #007: RUH Type: Initially Isolated 00:20:54.495 00:20:54.495 FDP reclaim unit handle usage log page 00:20:54.495 ====================================== 00:20:54.495 Number of Reclaim Unit Handles: 8 00:20:54.495 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:20:54.495 RUH Usage Desc #001: RUH Attributes: Unused 00:20:54.495 RUH Usage Desc #002: RUH Attributes: Unused 00:20:54.495 RUH Usage Desc #003: RUH Attributes: Unused 00:20:54.495 RUH Usage Desc #004: RUH Attributes: Unused 00:20:54.495 RUH Usage Desc #005: RUH Attributes: Unused 00:20:54.495 RUH Usage Desc #006: RUH Attributes: Unused 00:20:54.495 RUH Usage Desc #007: RUH Attributes: Unused 00:20:54.495 00:20:54.495 FDP statistics log page 00:20:54.495 ======================= 00:20:54.495 Host bytes with metadata written: 815800320 00:20:54.495 Media bytes with metadata written: 815902720 00:20:54.495 Media bytes erased: 0 00:20:54.495 00:20:54.495 FDP Reclaim unit handle status 00:20:54.495 ============================== 00:20:54.495 Number of RUHS descriptors: 2 00:20:54.495 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000055fe 00:20:54.495 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:20:54.495 00:20:54.495 FDP write on placement id: 0 success 00:20:54.495 00:20:54.495 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:20:54.495 00:20:54.495 IO mgmt send: RUH update for Placement ID: #0 Success 00:20:54.495 00:20:54.495 Get Feature: FDP Events for Placement handle: #0 00:20:54.495 ======================== 00:20:54.495 Number of FDP Events: 6 00:20:54.495 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:20:54.495 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:20:54.495 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:20:54.495 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:20:54.495 FDP Event: #4 Type: Media Reallocated Enabled: No 00:20:54.495 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:20:54.495 00:20:54.495 FDP events log page
00:20:54.495 =================== 00:20:54.495 Number of FDP events: 1 00:20:54.495 FDP Event #0: 00:20:54.495 Event Type: RU Not Written to Capacity 00:20:54.495 Placement Identifier: Valid 00:20:54.495 NSID: Valid 00:20:54.495 Location: Valid 00:20:54.495 Placement Identifier: 0 00:20:54.495 Event Timestamp: 8 00:20:54.495 Namespace Identifier: 1 00:20:54.495 Reclaim Group Identifier: 0 00:20:54.495 Reclaim Unit Handle Identifier: 0 00:20:54.495 00:20:54.495 FDP test passed 00:20:54.495 00:20:54.495 real 0m0.302s 00:20:54.495 user 0m0.117s 00:20:54.495 sys 0m0.084s 00:20:54.495 ************************************ 00:20:54.495 END TEST nvme_flexible_data_placement 00:20:54.495 ************************************ 00:20:54.495 11:35:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.495 11:35:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:20:54.495 ************************************ 00:20:54.495 END TEST nvme_fdp 00:20:54.495 ************************************ 00:20:54.495 00:20:54.495 real 0m8.440s 00:20:54.495 user 0m1.545s 00:20:54.495 sys 0m1.742s 00:20:54.495 11:35:00 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.495 11:35:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:54.495 11:35:00 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:20:54.495 11:35:00 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:54.495 11:35:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:54.495 11:35:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.495 11:35:00 -- common/autotest_common.sh@10 -- # set +x 00:20:54.495 ************************************ 00:20:54.495 START TEST nvme_rpc 00:20:54.495 ************************************ 00:20:54.495 11:35:00 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:54.754 * Looking for test storage... 
00:20:54.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.754 11:35:00 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.754 --rc genhtml_branch_coverage=1 00:20:54.754 --rc genhtml_function_coverage=1 00:20:54.754 --rc genhtml_legend=1 00:20:54.754 --rc geninfo_all_blocks=1 00:20:54.754 --rc geninfo_unexecuted_blocks=1 00:20:54.754 00:20:54.754 ' 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.754 --rc genhtml_branch_coverage=1 00:20:54.754 --rc genhtml_function_coverage=1 00:20:54.754 --rc genhtml_legend=1 00:20:54.754 --rc geninfo_all_blocks=1 00:20:54.754 --rc geninfo_unexecuted_blocks=1 00:20:54.754 00:20:54.754 ' 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:54.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.754 --rc genhtml_branch_coverage=1 00:20:54.754 --rc genhtml_function_coverage=1 00:20:54.754 --rc genhtml_legend=1 00:20:54.754 --rc geninfo_all_blocks=1 00:20:54.754 --rc geninfo_unexecuted_blocks=1 00:20:54.754 00:20:54.754 ' 00:20:54.754 11:35:00 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.755 --rc genhtml_branch_coverage=1 00:20:54.755 --rc genhtml_function_coverage=1 00:20:54.755 --rc genhtml_legend=1 00:20:54.755 --rc geninfo_all_blocks=1 00:20:54.755 --rc geninfo_unexecuted_blocks=1 00:20:54.755 00:20:54.755 ' 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67341 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:54.755 11:35:00 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67341 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67341 ']' 00:20:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.755 11:35:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:55.013 [2024-11-20 11:35:00.637346] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
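The get_first_nvme_bdf call traced above reduces to taking the first transport address that gen_nvme.sh emits; with four controllers present, that is 0000:00:10.0. A condensed sketch using the same jq filter shown in the trace:

```bash
# Condensed get_first_nvme_bdf: list NVMe PCI addresses from the generated
# config and keep the first one (0000:00:10.0 in this run).
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1   # mirrors the (( 4 == 0 )) guard above
echo "${bdfs[0]}"
```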
00:20:55.013 [2024-11-20 11:35:00.637559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67341 ] 00:20:55.271 [2024-11-20 11:35:00.830826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:55.271 [2024-11-20 11:35:00.996012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.271 [2024-11-20 11:35:00.996019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.203 11:35:01 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.203 11:35:01 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:56.203 11:35:01 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:56.768 Nvme0n1 00:20:56.768 11:35:02 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:56.768 11:35:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:57.025 request: 00:20:57.025 { 00:20:57.025 "bdev_name": "Nvme0n1", 00:20:57.026 "filename": "non_existing_file", 00:20:57.026 "method": "bdev_nvme_apply_firmware", 00:20:57.026 "req_id": 1 00:20:57.026 } 00:20:57.026 Got JSON-RPC error response 00:20:57.026 response: 00:20:57.026 { 00:20:57.026 "code": -32603, 00:20:57.026 "message": "open file failed." 00:20:57.026 } 00:20:57.026 11:35:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:57.026 11:35:02 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:57.026 11:35:02 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:57.283 11:35:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:57.283 11:35:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67341 00:20:57.283 11:35:03 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67341 ']' 00:20:57.283 11:35:03 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67341 00:20:57.283 11:35:03 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:57.283 11:35:03 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.283 11:35:03 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67341 00:20:57.541 killing process with pid 67341 00:20:57.541 11:35:03 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.541 11:35:03 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.541 11:35:03 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67341' 00:20:57.541 11:35:03 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67341 00:20:57.541 11:35:03 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67341 00:21:00.074 00:21:00.074 real 0m5.156s 00:21:00.074 user 0m10.019s 00:21:00.074 sys 0m0.771s 00:21:00.074 11:35:05 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.074 ************************************ 00:21:00.074 END TEST nvme_rpc 00:21:00.074 11:35:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:00.074 ************************************ 00:21:00.074 11:35:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:21:00.074 11:35:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:21:00.074 11:35:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.074 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:21:00.074 ************************************ 00:21:00.074 START TEST nvme_rpc_timeouts 00:21:00.074 ************************************ 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:21:00.074 * Looking for test storage... 00:21:00.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.074 11:35:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.074 --rc genhtml_branch_coverage=1 00:21:00.074 --rc genhtml_function_coverage=1 00:21:00.074 --rc genhtml_legend=1 00:21:00.074 --rc geninfo_all_blocks=1 00:21:00.074 --rc geninfo_unexecuted_blocks=1 00:21:00.074 00:21:00.074 ' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.074 --rc genhtml_branch_coverage=1 00:21:00.074 --rc genhtml_function_coverage=1 00:21:00.074 --rc genhtml_legend=1 00:21:00.074 --rc geninfo_all_blocks=1 00:21:00.074 --rc geninfo_unexecuted_blocks=1 00:21:00.074 00:21:00.074 ' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.074 --rc genhtml_branch_coverage=1 00:21:00.074 --rc genhtml_function_coverage=1 00:21:00.074 --rc genhtml_legend=1 00:21:00.074 --rc geninfo_all_blocks=1 00:21:00.074 --rc geninfo_unexecuted_blocks=1 00:21:00.074 00:21:00.074 ' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.074 --rc genhtml_branch_coverage=1 00:21:00.074 --rc genhtml_function_coverage=1 00:21:00.074 --rc genhtml_legend=1 00:21:00.074 --rc geninfo_all_blocks=1 00:21:00.074 --rc geninfo_unexecuted_blocks=1 00:21:00.074 00:21:00.074 ' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67428 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67428 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67460 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:21:00.074 11:35:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67460 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67460 ']' 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.074 11:35:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:00.074 [2024-11-20 11:35:05.772739] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:00.074 [2024-11-20 11:35:05.772978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67460 ] 00:21:00.333 [2024-11-20 11:35:05.962473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:00.591 [2024-11-20 11:35:06.147879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.591 [2024-11-20 11:35:06.147884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.529 11:35:07 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.529 11:35:07 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:21:01.529 Checking default timeout settings: 00:21:01.529 11:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:21:01.529 11:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:02.097 Making settings changes with rpc: 00:21:02.097 11:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:21:02.097 11:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:21:02.356 Check default vs. modified settings: 00:21:02.356 11:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:21:02.356 11:35:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:21:02.616 Setting action_on_timeout is changed as expected. 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:21:02.616 Setting timeout_us is changed as expected. 
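Every settings check above follows the same pattern: pull the value out of the saved default and modified JSON dumps with grep/awk/sed, then fail if the two still match, since that would mean bdev_nvme_set_options did not take effect; the same extraction repeats below for timeout_admin_us. Folded into a standalone sketch, reusing the pid-suffixed temp files from this run:

```bash
# Sketch of the default-vs-modified comparison loop from
# nvme_rpc_timeouts.sh, against the config dumps saved earlier in this run.
settings_to_check='action_on_timeout timeout_us timeout_admin_us'
for setting in $settings_to_check; do
    before=$(grep "$setting" /tmp/settings_default_67428 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    modified=$(grep "$setting" /tmp/settings_modified_67428 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$modified" ]; then
        echo "Setting $setting was NOT changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done
```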
00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:21:02.616 Setting timeout_admin_us is changed as expected. 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67428 /tmp/settings_modified_67428 00:21:02.616 11:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67460 00:21:02.616 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67460 ']' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67460 00:21:02.616 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:21:02.616 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.616 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67460 00:21:02.875 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.875 killing process with pid 67460 00:21:02.876 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.876 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67460' 00:21:02.876 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67460 00:21:02.876 11:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67460 00:21:05.411 RPC TIMEOUT SETTING TEST PASSED. 00:21:05.411 11:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:21:05.411 00:21:05.411 real 0m5.257s 00:21:05.411 user 0m10.410s 00:21:05.411 sys 0m0.759s 00:21:05.411 11:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.411 ************************************ 00:21:05.411 END TEST nvme_rpc_timeouts 00:21:05.411 ************************************ 00:21:05.411 11:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:05.411 11:35:10 -- spdk/autotest.sh@239 -- # uname -s 00:21:05.411 11:35:10 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:21:05.411 11:35:10 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:05.411 11:35:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.411 11:35:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.411 11:35:10 -- common/autotest_common.sh@10 -- # set +x 00:21:05.411 ************************************ 00:21:05.411 START TEST sw_hotplug 00:21:05.411 ************************************ 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:05.411 * Looking for test storage... 00:21:05.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.411 11:35:10 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.411 --rc genhtml_branch_coverage=1 00:21:05.411 --rc genhtml_function_coverage=1 00:21:05.411 --rc genhtml_legend=1 00:21:05.411 --rc geninfo_all_blocks=1 00:21:05.411 --rc geninfo_unexecuted_blocks=1 00:21:05.411 00:21:05.411 ' 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.411 --rc genhtml_branch_coverage=1 00:21:05.411 --rc genhtml_function_coverage=1 00:21:05.411 --rc genhtml_legend=1 00:21:05.411 --rc geninfo_all_blocks=1 00:21:05.411 --rc geninfo_unexecuted_blocks=1 00:21:05.411 00:21:05.411 ' 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.411 --rc genhtml_branch_coverage=1 00:21:05.411 --rc genhtml_function_coverage=1 00:21:05.411 --rc genhtml_legend=1 00:21:05.411 --rc geninfo_all_blocks=1 00:21:05.411 --rc geninfo_unexecuted_blocks=1 00:21:05.411 00:21:05.411 ' 00:21:05.411 11:35:10 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.411 --rc genhtml_branch_coverage=1 00:21:05.411 --rc genhtml_function_coverage=1 00:21:05.411 --rc genhtml_legend=1 00:21:05.411 --rc geninfo_all_blocks=1 00:21:05.411 --rc geninfo_unexecuted_blocks=1 00:21:05.411 00:21:05.411 ' 00:21:05.411 11:35:10 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:05.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.671 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:05.671 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:05.671 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:05.671 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:21:05.671 11:35:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:21:05.671 11:35:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:21:05.671 11:35:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:05.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:06.189 Waiting for block devices as requested 00:21:06.189 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:06.189 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:06.447 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:06.447 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.711 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:11.711 11:35:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:21:11.711 11:35:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:11.969 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:21:11.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.969 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:21:12.227 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:21:12.502 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:12.502 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:21:12.502 11:35:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68330 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:21:12.502 11:35:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:21:12.502 11:35:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:21:12.502 11:35:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:21:12.502 11:35:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:21:12.502 11:35:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:12.502 11:35:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:21:12.760 Initializing NVMe Controllers 00:21:12.760 Attaching to 0000:00:10.0 00:21:12.760 Attaching to 0000:00:11.0 00:21:12.760 Attached to 0000:00:10.0 00:21:12.760 Attached to 0000:00:11.0 00:21:12.760 Initialization complete. Starting I/O... 
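The nvme_in_userspace enumeration traced before this hotplug run is, once the per-device pci_can_use checks are stripped away, a single lspci pipeline that selects PCI functions with class/subclass/prog-if 01/08/02 (NVMe). Reassembled from the exact commands in the trace:

```bash
# NVMe enumeration as traced in scripts/common.sh iter_all_pci_class_code:
# class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe).
lspci -mm -n -D |
    grep -i -- -p02 |
    awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
    tr -d '"'
# prints 0000:00:10.0 through 0000:00:13.0, one per line, on this host
```

The test then keeps only the first two addresses (nvme_count=2), which is why PCI_ALLOWED above lists just 0000:00:10.0 and 0000:00:11.0.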
00:21:12.760 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:21:12.760 QEMU NVMe Ctrl (12341 ): 4 I/Os completed (+4) 00:21:12.760 00:21:14.137 QEMU NVMe Ctrl (12340 ): 1082 I/Os completed (+1082) 00:21:14.137 QEMU NVMe Ctrl (12341 ): 1298 I/Os completed (+1294) 00:21:14.137 00:21:15.072 QEMU NVMe Ctrl (12340 ): 3435 I/Os completed (+2353) 00:21:15.072 QEMU NVMe Ctrl (12341 ): 3564 I/Os completed (+2266) 00:21:15.072 00:21:16.007 QEMU NVMe Ctrl (12340 ): 4984 I/Os completed (+1549) 00:21:16.007 QEMU NVMe Ctrl (12341 ): 5227 I/Os completed (+1663) 00:21:16.007 00:21:16.941 QEMU NVMe Ctrl (12340 ): 6283 I/Os completed (+1299) 00:21:16.941 QEMU NVMe Ctrl (12341 ): 7016 I/Os completed (+1789) 00:21:16.941 00:21:17.946 QEMU NVMe Ctrl (12340 ): 7721 I/Os completed (+1438) 00:21:17.946 QEMU NVMe Ctrl (12341 ): 8719 I/Os completed (+1703) 00:21:17.946 00:21:18.511 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:18.511 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:18.511 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:18.511 [2024-11-20 11:35:24.260620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:21:18.511 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:18.511 [2024-11-20 11:35:24.262909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 [2024-11-20 11:35:24.263135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 [2024-11-20 11:35:24.263178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 [2024-11-20 11:35:24.263206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:18.511 [2024-11-20 11:35:24.266318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 [2024-11-20 11:35:24.266380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 [2024-11-20 11:35:24.266406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.511 [2024-11-20 11:35:24.266432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.768 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:18.768 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:18.768 [2024-11-20 11:35:24.296229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:21:18.768 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:18.769 [2024-11-20 11:35:24.298306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 [2024-11-20 11:35:24.298380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 [2024-11-20 11:35:24.298415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 [2024-11-20 11:35:24.298440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:18.769 [2024-11-20 11:35:24.301761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 [2024-11-20 11:35:24.301827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 [2024-11-20 11:35:24.301857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 [2024-11-20 11:35:24.301878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:18.769 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:18.769 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:18.769 Attaching to 0000:00:10.0 00:21:18.769 Attached to 0000:00:10.0 00:21:19.027 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:19.027 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:19.027 11:35:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:19.027 Attaching to 0000:00:11.0 00:21:19.027 Attached to 0000:00:11.0 00:21:19.959 QEMU NVMe Ctrl (12340 ): 1519 I/Os completed (+1519) 00:21:19.959 QEMU NVMe Ctrl (12341 ): 1667 I/Os completed (+1667) 00:21:19.959 00:21:20.893 QEMU NVMe Ctrl (12340 ): 3200 I/Os completed (+1681) 00:21:20.893 QEMU NVMe Ctrl (12341 ): 3665 I/Os completed (+1998) 00:21:20.893 00:21:21.828 QEMU NVMe Ctrl (12340 ): 4704 I/Os completed (+1504) 00:21:21.828 QEMU NVMe Ctrl (12341 ): 5257 I/Os completed (+1592) 00:21:21.828 00:21:22.763 QEMU NVMe Ctrl (12340 ): 6240 I/Os completed (+1536) 00:21:22.763 QEMU NVMe Ctrl (12341 ): 6927 I/Os completed (+1670) 00:21:22.763 00:21:24.140 QEMU NVMe Ctrl (12340 ): 7804 I/Os completed (+1564) 00:21:24.140 QEMU NVMe Ctrl (12341 ): 8569 I/Os completed (+1642) 00:21:24.140 00:21:25.074 QEMU NVMe Ctrl (12340 ): 9529 I/Os completed (+1725) 00:21:25.074 QEMU NVMe Ctrl (12341 ): 10326 I/Os completed (+1757) 00:21:25.074 00:21:26.009 QEMU NVMe Ctrl (12340 ): 11185 I/Os completed (+1656) 00:21:26.009 QEMU NVMe Ctrl (12341 ): 12058 I/Os completed (+1732) 00:21:26.009 00:21:26.946 QEMU NVMe Ctrl (12340 ): 12733 I/Os completed (+1548) 00:21:26.946 QEMU NVMe Ctrl (12341 ): 13703 I/Os completed (+1645) 00:21:26.946 
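In between the per-second I/O counters, remove_attach_helper is detaching and re-attaching both allowed controllers. xtrace prints the bare echo commands (echo 1, echo uio_pci_generic, echo 0000:00:10.0) but not their redirection targets, so the sysfs paths in this reconstruction are assumptions based on the standard Linux PCI hotplug interface, not read from the log:

```bash
# Hedged reconstruction of one remove/attach cycle; only the echo arguments
# appear in the trace, the target paths below are assumed.
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"      # hot-remove the function
echo 1 > /sys/bus/pci/rescan                     # re-enumerate the bus
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind   # rebind for SPDK
```

The "Controller removed" / "Attached to" messages around each cycle are the hotplug example observing those events.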
00:21:27.880 QEMU NVMe Ctrl (12340 ): 14192 I/Os completed (+1459) 00:21:27.880 QEMU NVMe Ctrl (12341 ): 15382 I/Os completed (+1679) 00:21:27.880 00:21:28.815 QEMU NVMe Ctrl (12340 ): 15812 I/Os completed (+1620) 00:21:28.815 QEMU NVMe Ctrl (12341 ): 17154 I/Os completed (+1772) 00:21:28.815 00:21:29.751 QEMU NVMe Ctrl (12340 ): 17548 I/Os completed (+1736) 00:21:29.751 QEMU NVMe Ctrl (12341 ): 18913 I/Os completed (+1759) 00:21:29.751 00:21:31.127 QEMU NVMe Ctrl (12340 ): 19316 I/Os completed (+1768) 00:21:31.127 QEMU NVMe Ctrl (12341 ): 20710 I/Os completed (+1797) 00:21:31.127 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:31.127 [2024-11-20 11:35:36.599554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:21:31.127 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:31.127 [2024-11-20 11:35:36.601558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.601625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.601656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.601683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:31.127 [2024-11-20 11:35:36.604628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.604688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.604715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.604738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:31.127 [2024-11-20 11:35:36.632206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:21:31.127 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:31.127 [2024-11-20 11:35:36.634044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.634102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.634136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.634160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:31.127 [2024-11-20 11:35:36.636790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.636841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.636867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 [2024-11-20 11:35:36.636889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:31.127 Attaching to 0000:00:10.0 00:21:31.127 Attached to 0000:00:10.0 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:31.127 11:35:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:31.127 Attaching to 0000:00:11.0 00:21:31.127 Attached to 0000:00:11.0 00:21:32.062 QEMU NVMe Ctrl (12340 ): 1276 I/Os completed (+1276) 00:21:32.062 QEMU NVMe Ctrl (12341 ): 1128 I/Os completed (+1128) 00:21:32.062 00:21:32.997 QEMU NVMe Ctrl (12340 ): 2976 I/Os completed (+1700) 00:21:32.997 QEMU NVMe Ctrl (12341 ): 2878 I/Os completed (+1750) 00:21:32.997 00:21:33.933 QEMU NVMe Ctrl (12340 ): 4536 I/Os completed (+1560) 00:21:33.933 QEMU NVMe Ctrl (12341 ): 4553 I/Os completed (+1675) 00:21:33.933 00:21:34.911 QEMU NVMe Ctrl (12340 ): 6101 I/Os completed (+1565) 00:21:34.911 QEMU NVMe Ctrl (12341 ): 6252 I/Os completed (+1699) 00:21:34.911 00:21:35.844 QEMU NVMe Ctrl (12340 ): 7749 I/Os completed (+1648) 00:21:35.844 QEMU NVMe Ctrl (12341 ): 7965 I/Os completed (+1713) 00:21:35.844 00:21:36.780 QEMU NVMe Ctrl (12340 ): 9361 I/Os completed (+1612) 00:21:36.780 QEMU NVMe Ctrl (12341 ): 9640 I/Os completed (+1675) 00:21:36.780 00:21:38.154 QEMU NVMe Ctrl (12340 ): 11000 I/Os completed (+1639) 00:21:38.154 QEMU NVMe Ctrl (12341 ): 11338 I/Os completed (+1698) 00:21:38.154 00:21:39.089 QEMU NVMe Ctrl (12340 ): 12688 I/Os completed (+1688) 00:21:39.089 QEMU NVMe Ctrl (12341 ): 13065 I/Os completed (+1727) 00:21:39.089 00:21:40.032 QEMU 
NVMe Ctrl (12340 ): 14392 I/Os completed (+1704) 00:21:40.032 QEMU NVMe Ctrl (12341 ): 14827 I/Os completed (+1762) 00:21:40.032 00:21:40.968 QEMU NVMe Ctrl (12340 ): 16076 I/Os completed (+1684) 00:21:40.968 QEMU NVMe Ctrl (12341 ): 16561 I/Os completed (+1734) 00:21:40.968 00:21:41.905 QEMU NVMe Ctrl (12340 ): 17716 I/Os completed (+1640) 00:21:41.905 QEMU NVMe Ctrl (12341 ): 18226 I/Os completed (+1665) 00:21:41.905 00:21:42.841 QEMU NVMe Ctrl (12340 ): 19058 I/Os completed (+1342) 00:21:42.841 QEMU NVMe Ctrl (12341 ): 19635 I/Os completed (+1409) 00:21:42.841 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:43.409 [2024-11-20 11:35:48.889972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:21:43.409 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:43.409 [2024-11-20 11:35:48.894103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.894228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.894282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.894334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:43.409 [2024-11-20 11:35:48.898375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.898449] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.898480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.898509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:43.409 [2024-11-20 11:35:48.910884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:21:43.409 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:43.409 [2024-11-20 11:35:48.913119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.913202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.913241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.913272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:43.409 [2024-11-20 11:35:48.916779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.916843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.916879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 [2024-11-20 11:35:48.916909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:43.409 11:35:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:43.409 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:43.409 Attaching to 0000:00:10.0 00:21:43.409 Attached to 0000:00:10.0 00:21:43.668 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:43.668 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:43.668 11:35:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:43.668 Attaching to 0000:00:11.0 00:21:43.668 Attached to 0000:00:11.0 00:21:43.668 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:43.668 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:43.668 [2024-11-20 11:35:49.219645] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:21:55.935 11:36:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:55.935 11:36:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:55.935 11:36:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.95 00:21:55.935 11:36:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.95 00:21:55.935 11:36:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:21:55.935 11:36:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.95 00:21:55.935 11:36:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.95 2 00:21:55.935 remove_attach_helper took 42.95s to complete (handling 2 nvme drive(s)) 11:36:01 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68330 00:22:02.497 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68330) - No such process 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68330 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68878 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:22:02.497 11:36:07 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68878 00:22:02.497 11:36:07 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68878 ']' 00:22:02.497 11:36:07 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.497 11:36:07 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.497 11:36:07 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.497 11:36:07 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.497 11:36:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:02.497 [2024-11-20 11:36:07.338129] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:22:02.497 [2024-11-20 11:36:07.338265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68878 ] 00:22:02.497 [2024-11-20 11:36:07.517888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.497 [2024-11-20 11:36:07.663210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:02.755 11:36:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:02.755 11:36:08 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:02.755 11:36:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:09.314 11:36:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:09.314 11:36:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:09.314 11:36:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.314 [2024-11-20 11:36:14.589223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:22:09.314 [2024-11-20 11:36:14.592103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.592210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.592235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 [2024-11-20 11:36:14.592266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.592283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.592300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 [2024-11-20 11:36:14.592315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.592332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.592346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 [2024-11-20 11:36:14.592368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.592382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.592399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 11:36:14 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:09.314 11:36:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:09.314 [2024-11-20 11:36:14.989297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:22:09.314 [2024-11-20 11:36:14.993280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.993361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.993388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 [2024-11-20 11:36:14.993433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.993450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.993464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 [2024-11-20 11:36:14.993481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.993494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.993509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.314 [2024-11-20 11:36:14.993523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:09.314 [2024-11-20 11:36:14.993560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.314 [2024-11-20 11:36:14.993573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:09.575 11:36:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.575 11:36:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:09.575 11:36:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:09.575 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:09.838 
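The tgt_run_hotplug phase above starts build/bin/spdk_tgt (pid 68878 in this run), installs a cleanup trap, and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and scripts/rpc.py as the client; the real waitforlisten helper in autotest_common.sh is more elaborate:

  spdk_tgt &                    # in the log: /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  spdk_tgt_pid=$!
  # mirror sw_hotplug.sh@112: kill the target and rescan PCI on any abnormal exit
  trap 'kill $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
  # waitforlisten, reduced to its essence: poll until the RPC server responds
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
  done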
11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:09.838 11:36:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:22.045 11:36:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.045 11:36:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:22.045 11:36:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:22.045 11:36:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.045 11:36:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:22.045 [2024-11-20 11:36:27.591317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
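The bdev_bdfs helper that keeps appearing in the trace (sw_hotplug.sh@12-13) is just the bdev_get_bdevs RPC piped through jq and sort; paired with the 'Still waiting for %s to be gone' loop at @50-51 it polls until the removed controllers drop out of the target's bdev list. Reconstructed from the xtrace lines above, with rpc.py standing in for the script's rpc_cmd wrapper:

  bdev_bdfs() {
    rpc.py bdev_get_bdevs |
      jq -r '.[].driver_specific.nvme[].pci_address' |
      sort -u
  }
  # after a surprise removal, wait until no bdev still claims a PCI address
  bdfs=($(bdev_bdfs))
  while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
  done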
00:22:22.045 [2024-11-20 11:36:27.594717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.045 [2024-11-20 11:36:27.594776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.045 [2024-11-20 11:36:27.594797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.045 [2024-11-20 11:36:27.594842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.045 [2024-11-20 11:36:27.594865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.045 [2024-11-20 11:36:27.594883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.045 [2024-11-20 11:36:27.594915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.045 [2024-11-20 11:36:27.594932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.045 [2024-11-20 11:36:27.594946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.045 [2024-11-20 11:36:27.594964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.045 [2024-11-20 11:36:27.594977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.045 [2024-11-20 11:36:27.594998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.045 11:36:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:22.045 11:36:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:22.306 [2024-11-20 11:36:27.991278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:22:22.306 [2024-11-20 11:36:27.994412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.306 [2024-11-20 11:36:27.994460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.306 [2024-11-20 11:36:27.994504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.306 [2024-11-20 11:36:27.994562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.306 [2024-11-20 11:36:27.994625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.306 [2024-11-20 11:36:27.994643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.306 [2024-11-20 11:36:27.994662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.306 [2024-11-20 11:36:27.994692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.306 [2024-11-20 11:36:27.994708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.306 [2024-11-20 11:36:27.994722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:22.306 [2024-11-20 11:36:27.994738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.306 [2024-11-20 11:36:27.994751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:22.579 11:36:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.579 11:36:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:22.579 11:36:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:22.579 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:22.838 11:36:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:35.078 11:36:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.078 11:36:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:35.078 11:36:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:35.078 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:35.078 [2024-11-20 11:36:40.591516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:22:35.078 [2024-11-20 11:36:40.595063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.078 [2024-11-20 11:36:40.595168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.078 [2024-11-20 11:36:40.595197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.079 [2024-11-20 11:36:40.595233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.079 [2024-11-20 11:36:40.595251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.079 [2024-11-20 11:36:40.595287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.079 [2024-11-20 11:36:40.595304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.079 [2024-11-20 11:36:40.595325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.079 [2024-11-20 11:36:40.595341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.079 [2024-11-20 11:36:40.595363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.079 [2024-11-20 11:36:40.595379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.079 [2024-11-20 11:36:40.595396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:35.079 11:36:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.079 11:36:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:35.079 11:36:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:35.079 11:36:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:35.337 [2024-11-20 11:36:41.091491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:22:35.337 [2024-11-20 11:36:41.094397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.337 [2024-11-20 11:36:41.094457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.337 [2024-11-20 11:36:41.094510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.337 [2024-11-20 11:36:41.094535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.337 [2024-11-20 11:36:41.094552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.337 [2024-11-20 11:36:41.094582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.337 [2024-11-20 11:36:41.094613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.337 [2024-11-20 11:36:41.094629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.337 [2024-11-20 11:36:41.094653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.337 [2024-11-20 11:36:41.094668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:35.337 [2024-11-20 11:36:41.094683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.337 [2024-11-20 11:36:41.094696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:22:35.596 11:36:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.596 11:36:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:35.596 11:36:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:35.596 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:35.855 11:36:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.07 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.07 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.07 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.07 2 00:22:48.058 remove_attach_helper took 45.07s to complete (handling 2 nvme drive(s)) 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.058 11:36:53 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:48.058 11:36:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:48.058 11:36:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:54.750 11:36:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.750 11:36:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:54.750 11:36:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:54.750 11:36:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:54.750 [2024-11-20 11:36:59.695116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
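Unlike the earlier driver-level pass, this phase exercises the target-side hotplug monitor: sw_hotplug.sh@119-120 above disable and then re-enable it over RPC before re-running the timed helper. The RPC name and flags are verbatim from the trace; using rpc.py as the transport is an assumption (the script goes through its rpc_cmd helper):

  rpc.py bdev_nvme_set_hotplug -d   # stop the bdev_nvme layer polling for PCI arrivals/removals
  rpc.py bdev_nvme_set_hotplug -e   # re-enable it; removals then surface as the AER aborts and
                                    # bdev unregistrations seen in the log below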
00:22:54.750 [2024-11-20 11:36:59.697149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:36:59.697199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:36:59.697221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:36:59.697254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:36:59.697271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:36:59.697288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:36:59.697304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:36:59.697320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:36:59.697335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:36:59.697368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:36:59.697397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:36:59.697427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:37:00.095079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
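The 'remove_attach_helper took 45.07s' summary printed at the end of each pass comes from the timing_cmd wrapper visible in the trace: TIMEFORMAT=%2R plus bash's time builtin. A minimal sketch of that idiom; the file-descriptor juggling keeps the helper's own output flowing to the console while capturing only the elapsed-seconds line:

  exec 3>&1 4>&2                 # preserve the real stdout/stderr
  TIMEFORMAT=%2R                 # make 'time' print just elapsed seconds, e.g. 45.07
  helper_time=$( { time remove_attach_helper 3 6 true 1>&3 2>&4; } 2>&1 )
  exec 3>&- 4>&-
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2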
00:22:54.750 [2024-11-20 11:37:00.097492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:37:00.097558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:37:00.097584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:37:00.097603] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:37:00.097619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:37:00.097632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:37:00.097648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:37:00.097661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:37:00.097675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 [2024-11-20 11:37:00.097689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:54.750 [2024-11-20 11:37:00.097704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.750 [2024-11-20 11:37:00.097716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:54.750 11:37:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.750 11:37:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:54.750 11:37:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:54.750 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:55.010 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:55.010 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:55.010 11:37:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:07.276 11:37:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.276 11:37:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:07.276 11:37:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:07.276 11:37:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.276 11:37:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:07.276 [2024-11-20 11:37:12.695375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:23:07.276 [2024-11-20 11:37:12.697986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.276 [2024-11-20 11:37:12.698051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-20 11:37:12.698083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-20 11:37:12.698114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.276 [2024-11-20 11:37:12.698130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-20 11:37:12.698147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-20 11:37:12.698162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.276 [2024-11-20 11:37:12.698178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-20 11:37:12.698192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-20 11:37:12.698209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.276 [2024-11-20 11:37:12.698256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-20 11:37:12.698272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 11:37:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:23:07.276 11:37:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:07.533 [2024-11-20 11:37:13.095388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:23:07.533 [2024-11-20 11:37:13.098499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.533 [2024-11-20 11:37:13.098571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.533 [2024-11-20 11:37:13.098598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.533 [2024-11-20 11:37:13.098624] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.533 [2024-11-20 11:37:13.098646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.533 [2024-11-20 11:37:13.098660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.533 [2024-11-20 11:37:13.098678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.533 [2024-11-20 11:37:13.098691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.533 [2024-11-20 11:37:13.098707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.533 [2024-11-20 11:37:13.098723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:07.533 [2024-11-20 11:37:13.098740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.533 [2024-11-20 11:37:13.098753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.533 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:23:07.533 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:07.533 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:07.533 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:07.533 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:07.533 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:07.533 11:37:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.533 11:37:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:07.533 11:37:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:07.790 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:23:08.047 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:08.047 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:08.047 11:37:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:20.241 11:37:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.241 11:37:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:20.241 11:37:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:20.241 [2024-11-20 11:37:25.695598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
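Once the controllers have been re-attached, the script sleeps 12 s to give the driver time to re-enumerate, then asserts that the bdev list again matches the expected pair of addresses before decrementing the hotplug-event counter (sw_hotplug.sh@66-71 and @38 above; the backslash-escaped pattern in the log is just the literal expected string). A sketch of that verification, reconstructed from the trace:

    # sw_hotplug.sh@66-71 (reconstructed): verify both controllers came back.
    expected='0000:00:10.0 0000:00:11.0'   # the two emulated NVMe devices in this run
    sleep 12
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "$expected" ]]        # fails the test via errexit if they differ

    (( hotplug_events-- ))                 # sw_hotplug.sh@38: one remove/attach cycle done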
00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:20.241 [2024-11-20 11:37:25.697704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.241 [2024-11-20 11:37:25.697752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.241 [2024-11-20 11:37:25.697775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-11-20 11:37:25.697806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.241 [2024-11-20 11:37:25.697823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.241 [2024-11-20 11:37:25.697841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-11-20 11:37:25.697858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.241 [2024-11-20 11:37:25.697918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.241 [2024-11-20 11:37:25.697933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-11-20 11:37:25.697950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.241 [2024-11-20 11:37:25.697964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.241 [2024-11-20 11:37:25.697980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:20.241 11:37:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.241 11:37:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:20.241 11:37:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:20.241 11:37:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:20.500 [2024-11-20 11:37:26.095613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:23:20.500 [2024-11-20 11:37:26.098455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.500 [2024-11-20 11:37:26.098522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.500 [2024-11-20 11:37:26.098558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.500 [2024-11-20 11:37:26.098586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.500 [2024-11-20 11:37:26.098605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.500 [2024-11-20 11:37:26.098619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.500 [2024-11-20 11:37:26.098635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.500 [2024-11-20 11:37:26.098648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.500 [2024-11-20 11:37:26.098663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.500 [2024-11-20 11:37:26.098679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:20.500 [2024-11-20 11:37:26.098697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.500 [2024-11-20 11:37:26.098710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.500 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:23:20.500 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:20.500 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:20.500 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:20.500 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:20.500 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:20.500 11:37:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.500 11:37:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:20.759 11:37:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:20.759 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:23:21.018 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:21.018 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:21.018 11:37:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.05 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.05 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:23:33.286 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:23:33.286 11:37:38 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68878 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68878 ']' 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68878 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68878 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68878' 00:23:33.286 killing process with pid 68878 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68878 00:23:33.286 11:37:38 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68878 00:23:35.185 11:37:40 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:35.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:36.011 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:36.011 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:36.011 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:36.011 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:36.269 00:23:36.269 real 2m31.097s 00:23:36.269 user 1m51.887s 00:23:36.269 sys 0m18.900s 00:23:36.269 11:37:41 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.269 11:37:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:36.269 ************************************ 00:23:36.269 END TEST sw_hotplug 00:23:36.269 ************************************ 00:23:36.269 11:37:41 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:23:36.269 11:37:41 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:36.269 11:37:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:36.269 11:37:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.269 11:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:36.269 ************************************ 00:23:36.269 START TEST nvme_xnvme 00:23:36.269 ************************************ 00:23:36.269 11:37:41 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:36.269 * Looking for test storage... 00:23:36.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:36.269 11:37:41 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.269 11:37:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.269 11:37:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.533 11:37:42 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.533 --rc genhtml_branch_coverage=1 00:23:36.533 --rc genhtml_function_coverage=1 00:23:36.533 --rc genhtml_legend=1 00:23:36.533 --rc geninfo_all_blocks=1 00:23:36.533 --rc geninfo_unexecuted_blocks=1 00:23:36.533 00:23:36.533 ' 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.533 --rc genhtml_branch_coverage=1 00:23:36.533 --rc genhtml_function_coverage=1 00:23:36.533 --rc genhtml_legend=1 00:23:36.533 --rc geninfo_all_blocks=1 00:23:36.533 --rc geninfo_unexecuted_blocks=1 00:23:36.533 00:23:36.533 ' 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.533 --rc genhtml_branch_coverage=1 00:23:36.533 --rc genhtml_function_coverage=1 00:23:36.533 --rc genhtml_legend=1 00:23:36.533 --rc geninfo_all_blocks=1 00:23:36.533 --rc geninfo_unexecuted_blocks=1 00:23:36.533 00:23:36.533 ' 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.533 --rc genhtml_branch_coverage=1 00:23:36.533 --rc genhtml_function_coverage=1 00:23:36.533 --rc genhtml_legend=1 00:23:36.533 --rc geninfo_all_blocks=1 00:23:36.533 --rc geninfo_unexecuted_blocks=1 00:23:36.533 00:23:36.533 ' 00:23:36.533 11:37:42 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:23:36.533 11:37:42 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:36.533 11:37:42 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:36.533 11:37:42 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:23:36.533 11:37:42 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
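The dump above (and continuing below) is test/common/build_config.sh, a generated snapshot of the configure flags the SPDK tree was built with; autotest_common.sh sources it so that test scripts can skip anything whose prerequisites were not compiled in. A hedged illustration of the pattern (the gating condition here is mine, not from this log; the real scripts key off both these CONFIG_* values and the SPDK_TEST_* variables from autorun-spdk.conf):

    # Illustration only: gate a test on the recorded build configuration.
    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
    if [[ $CONFIG_XNVME != y ]]; then
        echo "xnvme support not compiled in, skipping nvme_xnvme" >&2
        exit 0
    fi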
00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:23:36.534 11:37:42 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:23:36.534 11:37:42 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:23:36.534 11:37:42 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:36.534 11:37:42 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:36.534 #define SPDK_CONFIG_H 00:23:36.534 #define SPDK_CONFIG_AIO_FSDEV 1 00:23:36.534 #define SPDK_CONFIG_APPS 1 00:23:36.534 #define SPDK_CONFIG_ARCH native 00:23:36.534 #define SPDK_CONFIG_ASAN 1 00:23:36.534 #undef SPDK_CONFIG_AVAHI 00:23:36.534 #undef SPDK_CONFIG_CET 00:23:36.534 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:23:36.534 #define SPDK_CONFIG_COVERAGE 1 00:23:36.534 #define SPDK_CONFIG_CROSS_PREFIX 00:23:36.534 #undef SPDK_CONFIG_CRYPTO 00:23:36.534 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:36.534 #undef SPDK_CONFIG_CUSTOMOCF 00:23:36.534 #undef SPDK_CONFIG_DAOS 00:23:36.534 #define SPDK_CONFIG_DAOS_DIR 00:23:36.534 #define SPDK_CONFIG_DEBUG 1 00:23:36.534 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:36.534 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:23:36.534 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:36.534 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:36.534 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:36.534 #undef SPDK_CONFIG_DPDK_UADK 00:23:36.534 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:36.534 #define SPDK_CONFIG_EXAMPLES 1 00:23:36.534 #undef SPDK_CONFIG_FC 00:23:36.534 #define SPDK_CONFIG_FC_PATH 00:23:36.534 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:36.534 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:36.534 #define SPDK_CONFIG_FSDEV 1 00:23:36.534 #undef SPDK_CONFIG_FUSE 00:23:36.534 #undef SPDK_CONFIG_FUZZER 00:23:36.534 #define SPDK_CONFIG_FUZZER_LIB 00:23:36.534 #undef SPDK_CONFIG_GOLANG 00:23:36.534 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:36.534 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:23:36.534 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:36.534 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:23:36.534 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:36.534 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:36.534 #undef SPDK_CONFIG_HAVE_LZ4 00:23:36.534 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:23:36.534 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:23:36.534 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:36.534 #define SPDK_CONFIG_IDXD 1 00:23:36.534 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:36.534 #undef SPDK_CONFIG_IPSEC_MB 00:23:36.534 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:36.534 #define SPDK_CONFIG_ISAL 1 00:23:36.534 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:36.534 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:36.534 #define SPDK_CONFIG_LIBDIR 00:23:36.534 #undef SPDK_CONFIG_LTO 00:23:36.534 #define SPDK_CONFIG_MAX_LCORES 128 00:23:36.534 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:23:36.534 #define SPDK_CONFIG_NVME_CUSE 1 00:23:36.534 #undef SPDK_CONFIG_OCF 00:23:36.534 #define SPDK_CONFIG_OCF_PATH 00:23:36.534 #define SPDK_CONFIG_OPENSSL_PATH 00:23:36.534 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:36.534 #define SPDK_CONFIG_PGO_DIR 00:23:36.534 #undef SPDK_CONFIG_PGO_USE 00:23:36.534 #define SPDK_CONFIG_PREFIX /usr/local 00:23:36.534 #undef SPDK_CONFIG_RAID5F 00:23:36.534 #undef SPDK_CONFIG_RBD 00:23:36.534 #define SPDK_CONFIG_RDMA 1 00:23:36.534 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:36.534 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:36.534 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:36.534 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:36.534 #define SPDK_CONFIG_SHARED 1 00:23:36.534 #undef SPDK_CONFIG_SMA 00:23:36.534 #define SPDK_CONFIG_TESTS 1 00:23:36.535 #undef SPDK_CONFIG_TSAN 00:23:36.535 #define SPDK_CONFIG_UBLK 1 00:23:36.535 #define SPDK_CONFIG_UBSAN 1 00:23:36.535 #undef SPDK_CONFIG_UNIT_TESTS 00:23:36.535 #undef SPDK_CONFIG_URING 00:23:36.535 #define SPDK_CONFIG_URING_PATH 00:23:36.535 #undef SPDK_CONFIG_URING_ZNS 00:23:36.535 #undef SPDK_CONFIG_USDT 00:23:36.535 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:36.535 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:36.535 #undef SPDK_CONFIG_VFIO_USER 00:23:36.535 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:36.535 #define SPDK_CONFIG_VHOST 1 00:23:36.535 #define SPDK_CONFIG_VIRTIO 1 00:23:36.535 #undef SPDK_CONFIG_VTUNE 00:23:36.535 #define SPDK_CONFIG_VTUNE_DIR 00:23:36.535 #define SPDK_CONFIG_WERROR 1 00:23:36.535 #define SPDK_CONFIG_WPDK_DIR 00:23:36.535 #define SPDK_CONFIG_XNVME 1 00:23:36.535 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:36.535 11:37:42 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:36.535 11:37:42 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.535 11:37:42 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.535 11:37:42 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.535 11:37:42 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.535 11:37:42 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.535 11:37:42 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.535 11:37:42 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.535 11:37:42 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.535 11:37:42 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:23:36.535 11:37:42 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@68 -- # uname -s 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:23:36.535 
11:37:42 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:23:36.535 11:37:42 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:23:36.535 11:37:42 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:23:36.536 11:37:42 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:36.536 11:37:42 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
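The _LCOV* assignments at autotest_common.sh@267-275 (the block continues just below) pick the gcov tool for coverage runs: a clang build needs lcov pointed at the llvm-gcov wrapper, while this gcc run leaves lcov_opt empty. A sketch of that selection, reconstructed from the trace; the exact conditional at @270 differs slightly in the source, so treat this as an approximation:

    # Reconstructed coverage-tool selection (autotest_common.sh@267-275):
    _LCOV_MAIN=0 _LCOV_LLVM=1 _LCOV=
    [[ ${CC:-} == *clang* ]] && _LCOV=$_LCOV_LLVM       # '' == *clang* above, so _LCOV stays unset
    _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
    _lcov_opt[_LCOV_MAIN]=
    lcov_opt=${_lcov_opt[_LCOV]}                        # empty for gcc, llvm wrapper for clang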
00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70222 ]] 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70222 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:23:36.536 11:37:42 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Ctc5ak 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Ctc5ak/tests/xnvme /tmp/spdk.Ctc5ak 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:23:36.537 11:37:42 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976227840 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591658496 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976227840 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591658496 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94879666176 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4823113728 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:23:36.537 * Looking for test storage... 
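The mount walk above is the heart of set_test_storage: each `df -T` row is read into the mounts/fss/sizes/avails/uses arrays, and the candidate directories are then checked against the requested 2 GiB before SPDK_TEST_STORAGE is exported. A minimal sketch of that selection step, assuming GNU df (the array bookkeeping and the candidate list are simplified from the trace above):

  # Sketch: pick the first candidate whose filesystem has enough free space.
  # requested_size matches the 2 GiB figure in the trace; candidates simplified.
  requested_size=2147483648
  for dir in /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp; do
    avail=$(df --output=avail -B1 "$dir" | tail -n1)   # GNU df, bytes free
    if (( avail >= requested_size )); then
      printf '* Found test storage at %s\n' "$dir"
      export SPDK_TEST_STORAGE=$dir
      break
    fi
  done

Falling back from $testdir to a mktemp-style scratch directory, as the storage_candidates array does above, keeps the tests runnable even when the repo sits on a nearly full btrfs volume like /dev/vda5 here.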
00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976227840 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:36.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.537 11:37:42 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.840 11:37:42 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.840 11:37:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:23:36.841 11:37:42 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.841 11:37:42 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.841 --rc genhtml_branch_coverage=1 00:23:36.841 --rc genhtml_function_coverage=1 00:23:36.841 --rc genhtml_legend=1 00:23:36.841 --rc geninfo_all_blocks=1 00:23:36.841 --rc geninfo_unexecuted_blocks=1 00:23:36.841 00:23:36.841 ' 00:23:36.841 11:37:42 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.841 --rc genhtml_branch_coverage=1 00:23:36.841 --rc genhtml_function_coverage=1 00:23:36.841 --rc genhtml_legend=1 00:23:36.841 --rc geninfo_all_blocks=1 
00:23:36.841 --rc geninfo_unexecuted_blocks=1 00:23:36.841 00:23:36.841 ' 00:23:36.841 11:37:42 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.841 --rc genhtml_branch_coverage=1 00:23:36.841 --rc genhtml_function_coverage=1 00:23:36.841 --rc genhtml_legend=1 00:23:36.841 --rc geninfo_all_blocks=1 00:23:36.841 --rc geninfo_unexecuted_blocks=1 00:23:36.841 00:23:36.841 ' 00:23:36.841 11:37:42 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.841 --rc genhtml_branch_coverage=1 00:23:36.841 --rc genhtml_function_coverage=1 00:23:36.841 --rc genhtml_legend=1 00:23:36.841 --rc geninfo_all_blocks=1 00:23:36.841 --rc geninfo_unexecuted_blocks=1 00:23:36.841 00:23:36.841 ' 00:23:36.841 11:37:42 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.841 11:37:42 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.841 11:37:42 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.841 11:37:42 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.841 11:37:42 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.841 11:37:42 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:23:36.841 11:37:42 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.841 11:37:42 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:23:36.841 11:37:42 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:37.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:37.358 Waiting for block devices as requested 00:23:37.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:37.358 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:37.616 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:37.616 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.885 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:42.885 11:37:48 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:23:43.143 11:37:48 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:23:43.143 11:37:48 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:23:43.143 11:37:48 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:23:43.143 11:37:48 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:23:43.143 11:37:48 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:23:43.143 11:37:48 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:23:43.143 11:37:48 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:23:43.401 No valid GPT data, bailing 00:23:43.401 11:37:48 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:43.401 11:37:48 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:23:43.401 11:37:48 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:23:43.401 11:37:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:23:43.401 11:37:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:43.401 11:37:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.401 11:37:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:43.401 ************************************ 00:23:43.401 START TEST xnvme_rpc 00:23:43.401 ************************************ 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70611 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70611 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70611 ']' 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.401 11:37:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.401 [2024-11-20 11:37:49.149756] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
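waitforlisten blocks until the freshly forked spdk_tgt (pid 70611 in this run) exposes its JSON-RPC socket. Reduced to its essence, and not the harness's actual implementation, the wait looks roughly like this, with the socket path taken from the message logged above:

  # Illustrative only: poll for the UNIX-domain RPC socket, bail out if the
  # target dies first. kill -0 probes liveness without delivering a signal.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt=$!
  while ! [[ -S /var/tmp/spdk.sock ]]; do
    kill -0 "$spdk_tgt" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
    sleep 0.1
  done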
00:23:43.401 [2024-11-20 11:37:49.149929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70611 ] 00:23:43.658 [2024-11-20 11:37:49.338842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.916 [2024-11-20 11:37:49.493078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:44.852 xnvme_bdev 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:23:44.852 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70611 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70611 ']' 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70611 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70611 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.853 killing process with pid 70611 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70611' 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70611 00:23:44.853 11:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70611 00:23:47.386 00:23:47.386 real 0m3.760s 00:23:47.386 user 0m3.946s 00:23:47.386 sys 0m0.573s 00:23:47.386 11:37:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.386 11:37:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:47.386 ************************************ 00:23:47.386 END TEST xnvme_rpc 00:23:47.386 ************************************ 00:23:47.386 11:37:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:23:47.386 11:37:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:47.386 11:37:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.386 11:37:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:47.386 ************************************ 00:23:47.386 START TEST xnvme_bdevperf 00:23:47.386 ************************************ 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:47.386 11:37:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.386 { 00:23:47.386 "subsystems": [ 00:23:47.386 { 00:23:47.386 "subsystem": "bdev", 00:23:47.386 "config": [ 00:23:47.386 { 00:23:47.386 "params": { 00:23:47.386 "io_mechanism": "libaio", 00:23:47.386 "conserve_cpu": false, 00:23:47.386 "filename": "/dev/nvme0n1", 00:23:47.386 "name": "xnvme_bdev" 00:23:47.386 }, 00:23:47.386 "method": "bdev_xnvme_create" 00:23:47.386 }, 00:23:47.386 { 00:23:47.386 "method": "bdev_wait_for_examine" 00:23:47.386 } 00:23:47.386 ] 00:23:47.386 } 00:23:47.386 ] 00:23:47.386 } 00:23:47.386 [2024-11-20 11:37:52.882013] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:23:47.386 [2024-11-20 11:37:52.882299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70692 ] 00:23:47.386 [2024-11-20 11:37:53.058187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.644 [2024-11-20 11:37:53.187093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.903 Running I/O for 5 seconds... 00:23:50.218 28096.00 IOPS, 109.75 MiB/s [2024-11-20T11:37:56.552Z] 27838.50 IOPS, 108.74 MiB/s [2024-11-20T11:37:57.942Z] 27835.00 IOPS, 108.73 MiB/s [2024-11-20T11:37:58.878Z] 27996.00 IOPS, 109.36 MiB/s [2024-11-20T11:37:58.878Z] 28236.40 IOPS, 110.30 MiB/s 00:23:53.112 Latency(us) 00:23:53.112 [2024-11-20T11:37:58.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.112 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:23:53.112 xnvme_bdev : 5.01 28211.11 110.20 0.00 0.00 2263.27 417.05 5659.93 00:23:53.112 [2024-11-20T11:37:58.878Z] =================================================================================================================== 00:23:53.112 [2024-11-20T11:37:58.878Z] Total : 28211.11 110.20 0.00 0.00 2263.27 417.05 5659.93 00:23:54.048 11:37:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:54.048 11:37:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:23:54.048 11:37:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:54.048 11:37:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:54.048 11:37:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.048 { 00:23:54.048 "subsystems": [ 00:23:54.048 { 00:23:54.048 "subsystem": "bdev", 00:23:54.048 "config": [ 00:23:54.048 { 00:23:54.048 "params": { 00:23:54.048 "io_mechanism": "libaio", 00:23:54.048 "conserve_cpu": false, 00:23:54.048 "filename": "/dev/nvme0n1", 00:23:54.048 "name": "xnvme_bdev" 00:23:54.048 }, 00:23:54.048 "method": "bdev_xnvme_create" 00:23:54.048 }, 00:23:54.048 { 00:23:54.048 "method": "bdev_wait_for_examine" 00:23:54.048 } 00:23:54.048 ] 00:23:54.048 } 00:23:54.048 ] 00:23:54.048 } 00:23:54.048 [2024-11-20 11:37:59.681082] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
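gen_conf emits the subsystem JSON shown above on stdout, and bdevperf reads it back through the /dev/fd/62 path visible in the logged command line. Stripped of the harness plumbing, the same mechanism can be reproduced with bash process substitution, which hands the consumer a /dev/fd/N path exactly like the one in the trace (the stand-in function and file path below are hypothetical):

  # Stand-in for the harness's gen_conf: emit the JSON config printed above.
  gen_conf() { cat /home/vagrant/xnvme.json; }   # hypothetical path
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_conf) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

Passing the config over a file descriptor rather than a temp file means the generated JSON never has to be cleaned up and cannot go stale between runs.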
00:23:54.048 [2024-11-20 11:37:59.681300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70773 ] 00:23:54.307 [2024-11-20 11:37:59.868931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.307 [2024-11-20 11:37:59.993856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.874 Running I/O for 5 seconds... 00:23:56.745 27718.00 IOPS, 108.27 MiB/s [2024-11-20T11:38:03.446Z] 28093.00 IOPS, 109.74 MiB/s [2024-11-20T11:38:04.382Z] 28184.33 IOPS, 110.10 MiB/s [2024-11-20T11:38:05.758Z] 27769.75 IOPS, 108.48 MiB/s 00:23:59.992 Latency(us) 00:23:59.992 [2024-11-20T11:38:05.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.992 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:23:59.992 xnvme_bdev : 5.00 27285.24 106.58 0.00 0.00 2339.11 256.93 5659.93 00:23:59.992 [2024-11-20T11:38:05.758Z] =================================================================================================================== 00:23:59.992 [2024-11-20T11:38:05.758Z] Total : 27285.24 106.58 0.00 0.00 2339.11 256.93 5659.93 00:24:00.928 00:24:00.928 real 0m13.593s 00:24:00.928 user 0m5.017s 00:24:00.928 sys 0m6.193s 00:24:00.928 11:38:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.928 11:38:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.928 ************************************ 00:24:00.928 END TEST xnvme_bdevperf 00:24:00.928 ************************************ 00:24:00.928 11:38:06 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:24:00.928 11:38:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.928 11:38:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.928 11:38:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:00.928 ************************************ 00:24:00.928 START TEST xnvme_fio_plugin 00:24:00.928 ************************************ 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:00.928 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:00.929 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:00.929 11:38:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:00.929 { 00:24:00.929 "subsystems": [ 00:24:00.929 { 00:24:00.929 "subsystem": "bdev", 00:24:00.929 "config": [ 00:24:00.929 { 00:24:00.929 "params": { 00:24:00.929 "io_mechanism": "libaio", 00:24:00.929 "conserve_cpu": false, 00:24:00.929 "filename": "/dev/nvme0n1", 00:24:00.929 "name": "xnvme_bdev" 00:24:00.929 }, 00:24:00.929 "method": "bdev_xnvme_create" 00:24:00.929 }, 00:24:00.929 { 00:24:00.929 "method": "bdev_wait_for_examine" 00:24:00.929 } 00:24:00.929 ] 00:24:00.929 } 00:24:00.929 ] 00:24:00.929 } 00:24:01.187 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:01.187 fio-3.35 00:24:01.187 Starting 1 thread 00:24:07.752 00:24:07.752 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70892: Wed Nov 20 11:38:12 2024 00:24:07.752 read: IOPS=27.7k, BW=108MiB/s (114MB/s)(542MiB/5001msec) 00:24:07.752 slat (usec): min=5, max=2238, avg=31.70, stdev=28.09 00:24:07.752 clat (usec): min=114, max=6398, avg=1294.07, stdev=732.30 00:24:07.752 lat (usec): min=173, max=6429, avg=1325.77, stdev=735.87 00:24:07.752 clat percentiles (usec): 00:24:07.752 | 1.00th=[ 237], 5.00th=[ 338], 10.00th=[ 445], 20.00th=[ 635], 00:24:07.752 | 30.00th=[ 816], 40.00th=[ 996], 50.00th=[ 1172], 60.00th=[ 1369], 00:24:07.752 | 70.00th=[ 1598], 80.00th=[ 1893], 90.00th=[ 2311], 95.00th=[ 2671], 00:24:07.752 | 99.00th=[ 3392], 99.50th=[ 3687], 99.90th=[ 4293], 99.95th=[ 4555], 00:24:07.752 | 99.99th=[ 5145] 00:24:07.752 bw ( KiB/s): min=92176, max=123824, per=100.00%, avg=112015.44, 
stdev=11384.56, samples=9 00:24:07.752 iops : min=23044, max=30956, avg=28003.78, stdev=2846.28, samples=9 00:24:07.752 lat (usec) : 250=1.40%, 500=11.51%, 750=13.50%, 1000=13.76% 00:24:07.752 lat (msec) : 2=42.97%, 4=16.64%, 10=0.22% 00:24:07.752 cpu : usr=27.04%, sys=52.24%, ctx=99, majf=0, minf=764 00:24:07.752 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=11.9%, 16=26.3%, 32=54.0%, >=64=1.7% 00:24:07.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.752 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:24:07.752 issued rwts: total=138760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:07.752 00:24:07.752 Run status group 0 (all jobs): 00:24:07.752 READ: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=542MiB (568MB), run=5001-5001msec 00:24:08.342 ----------------------------------------------------- 00:24:08.342 Suppressions used: 00:24:08.342 count bytes template 00:24:08.342 1 11 /usr/src/fio/parse.c 00:24:08.342 1 8 libtcmalloc_minimal.so 00:24:08.342 1 904 libcrypto.so 00:24:08.342 ----------------------------------------------------- 00:24:08.342 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:08.342 11:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:08.342 { 00:24:08.342 "subsystems": [ 00:24:08.342 { 00:24:08.342 "subsystem": "bdev", 00:24:08.342 "config": [ 00:24:08.342 { 00:24:08.342 "params": { 00:24:08.342 "io_mechanism": "libaio", 00:24:08.342 "conserve_cpu": false, 00:24:08.342 "filename": "/dev/nvme0n1", 00:24:08.342 "name": "xnvme_bdev" 00:24:08.342 }, 00:24:08.342 "method": "bdev_xnvme_create" 00:24:08.342 }, 00:24:08.342 { 00:24:08.342 "method": "bdev_wait_for_examine" 00:24:08.342 } 00:24:08.342 ] 00:24:08.342 } 00:24:08.342 ] 00:24:08.342 } 00:24:08.342 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:08.342 fio-3.35 00:24:08.342 Starting 1 thread 00:24:14.907 00:24:14.907 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70990: Wed Nov 20 11:38:19 2024 00:24:14.907 write: IOPS=28.8k, BW=113MiB/s (118MB/s)(563MiB/5001msec); 0 zone resets 00:24:14.907 slat (usec): min=4, max=5141, avg=30.19, stdev=35.13 00:24:14.907 clat (usec): min=9, max=8754, avg=1293.89, stdev=837.97 00:24:14.907 lat (usec): min=71, max=8813, avg=1324.09, stdev=838.96 00:24:14.907 clat percentiles (usec): 00:24:14.907 | 1.00th=[ 227], 5.00th=[ 334], 10.00th=[ 429], 20.00th=[ 611], 00:24:14.907 | 30.00th=[ 783], 40.00th=[ 963], 50.00th=[ 1139], 60.00th=[ 1319], 00:24:14.907 | 70.00th=[ 1549], 80.00th=[ 1860], 90.00th=[ 2311], 95.00th=[ 2704], 00:24:14.907 | 99.00th=[ 4293], 99.50th=[ 5145], 99.90th=[ 6456], 99.95th=[ 7373], 00:24:14.907 | 99.99th=[ 8160] 00:24:14.907 bw ( KiB/s): min=94528, max=133568, per=98.64%, avg=113806.56, stdev=13020.91, samples=9 00:24:14.907 iops : min=23632, max=33392, avg=28451.56, stdev=3255.11, samples=9 00:24:14.907 lat (usec) : 10=0.01%, 20=0.01%, 50=0.02%, 100=0.06%, 250=1.51% 00:24:14.907 lat (usec) : 500=12.37%, 750=13.92%, 1000=14.49% 00:24:14.907 lat (msec) : 2=41.29%, 4=15.01%, 10=1.34% 00:24:14.907 cpu : usr=26.04%, sys=54.56%, ctx=121, majf=0, minf=764 00:24:14.907 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=11.5%, 16=25.2%, 32=55.5%, >=64=1.9% 00:24:14.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.907 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:24:14.907 issued rwts: total=0,144249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:14.907 00:24:14.907 Run status group 0 (all jobs): 00:24:14.907 WRITE: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=563MiB (591MB), run=5001-5001msec 00:24:15.475 ----------------------------------------------------- 00:24:15.475 Suppressions used: 00:24:15.475 count bytes template 00:24:15.475 1 11 /usr/src/fio/parse.c 00:24:15.475 1 8 libtcmalloc_minimal.so 00:24:15.475 1 904 libcrypto.so 00:24:15.475 ----------------------------------------------------- 00:24:15.475 00:24:15.475 00:24:15.475 real 0m14.760s 00:24:15.475 user 0m6.258s 00:24:15.475 sys 
0m6.116s 00:24:15.475 11:38:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.475 11:38:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:15.475 ************************************ 00:24:15.475 END TEST xnvme_fio_plugin 00:24:15.475 ************************************ 00:24:15.475 11:38:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:24:15.475 11:38:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:24:15.475 11:38:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:24:15.475 11:38:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:24:15.475 11:38:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.475 11:38:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.475 11:38:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:15.734 ************************************ 00:24:15.734 START TEST xnvme_rpc 00:24:15.734 ************************************ 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71076 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71076 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71076 ']' 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.734 11:38:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:15.734 [2024-11-20 11:38:21.385686] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
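This second xnvme_rpc pass differs from the first only in that cc["true"]=-c is passed through to bdev_xnvme_create, so conserve_cpu now comes back true. Condensed, the create-and-verify pattern the test exercises looks like this (rpc_cmd is the harness wrapper around scripts/rpc.py; all three commands appear verbatim in the trace):

  # Create the xnvme bdev with conserve_cpu enabled (-c), confirm the flag
  # round-trips through the saved config, then tear the bdev down again.
  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
  rpc_cmd framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # expected output: true
  rpc_cmd bdev_xnvme_delete xnvme_bdev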
00:24:15.734 [2024-11-20 11:38:21.385892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:24:15.994 [2024-11-20 11:38:21.565325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.994 [2024-11-20 11:38:21.687600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:16.930 xnvme_bdev 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:16.930 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71076 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71076 ']' 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71076 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71076 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71076' 00:24:17.189 killing process with pid 71076 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71076 00:24:17.189 11:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71076 00:24:19.722 00:24:19.722 real 0m3.735s 00:24:19.722 user 0m3.935s 00:24:19.722 sys 0m0.600s 00:24:19.722 11:38:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.722 11:38:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:19.722 ************************************ 00:24:19.722 END TEST xnvme_rpc 00:24:19.722 ************************************ 00:24:19.722 11:38:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:19.722 11:38:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:19.722 11:38:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.722 11:38:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:19.722 ************************************ 00:24:19.722 START TEST xnvme_bdevperf 00:24:19.722 ************************************ 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:19.722 11:38:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.722 { 00:24:19.722 "subsystems": [ 00:24:19.722 { 00:24:19.722 "subsystem": "bdev", 00:24:19.722 "config": [ 00:24:19.722 { 00:24:19.722 "params": { 00:24:19.722 "io_mechanism": "libaio", 00:24:19.722 "conserve_cpu": true, 00:24:19.722 "filename": "/dev/nvme0n1", 00:24:19.722 "name": "xnvme_bdev" 00:24:19.722 }, 00:24:19.722 "method": "bdev_xnvme_create" 00:24:19.722 }, 00:24:19.722 { 00:24:19.722 "method": "bdev_wait_for_examine" 00:24:19.722 } 00:24:19.722 ] 00:24:19.722 } 00:24:19.722 ] 00:24:19.722 } 00:24:19.722 [2024-11-20 11:38:25.150894] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:19.722 [2024-11-20 11:38:25.151084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71156 ] 00:24:19.722 [2024-11-20 11:38:25.332517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.722 [2024-11-20 11:38:25.456710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.289 Running I/O for 5 seconds... 00:24:22.160 29525.00 IOPS, 115.33 MiB/s [2024-11-20T11:38:28.860Z] 28134.00 IOPS, 109.90 MiB/s [2024-11-20T11:38:30.240Z] 28514.33 IOPS, 111.38 MiB/s [2024-11-20T11:38:31.184Z] 28026.75 IOPS, 109.48 MiB/s 00:24:25.418 Latency(us) 00:24:25.418 [2024-11-20T11:38:31.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.418 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:25.418 xnvme_bdev : 5.00 27434.56 107.17 0.00 0.00 2327.26 247.62 5302.46 00:24:25.418 [2024-11-20T11:38:31.184Z] =================================================================================================================== 00:24:25.418 [2024-11-20T11:38:31.184Z] Total : 27434.56 107.17 0.00 0.00 2327.26 247.62 5302.46 00:24:26.353 11:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:26.353 11:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:24:26.353 11:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:24:26.353 11:38:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:26.353 11:38:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:26.353 { 00:24:26.353 "subsystems": [ 00:24:26.353 { 00:24:26.353 "subsystem": "bdev", 00:24:26.353 "config": [ 00:24:26.353 { 00:24:26.353 "params": { 00:24:26.353 "io_mechanism": "libaio", 00:24:26.353 "conserve_cpu": true, 00:24:26.353 "filename": "/dev/nvme0n1", 00:24:26.353 "name": "xnvme_bdev" 00:24:26.353 }, 00:24:26.353 "method": "bdev_xnvme_create" 00:24:26.353 }, 00:24:26.353 { 00:24:26.353 "method": "bdev_wait_for_examine" 00:24:26.353 } 00:24:26.353 ] 00:24:26.353 } 00:24:26.353 ] 00:24:26.353 } 00:24:26.353 [2024-11-20 11:38:31.907255] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:24:26.353 [2024-11-20 11:38:31.908116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71232 ] 00:24:26.353 [2024-11-20 11:38:32.091655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.611 [2024-11-20 11:38:32.211396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.870 Running I/O for 5 seconds... 00:24:29.180 26280.00 IOPS, 102.66 MiB/s [2024-11-20T11:38:35.882Z] 25546.00 IOPS, 99.79 MiB/s [2024-11-20T11:38:36.908Z] 24844.33 IOPS, 97.05 MiB/s [2024-11-20T11:38:37.876Z] 24107.75 IOPS, 94.17 MiB/s 00:24:32.110 Latency(us) 00:24:32.110 [2024-11-20T11:38:37.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.110 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:24:32.110 xnvme_bdev : 5.00 23668.63 92.46 0.00 0.00 2696.92 255.07 6196.13 00:24:32.110 [2024-11-20T11:38:37.876Z] =================================================================================================================== 00:24:32.110 [2024-11-20T11:38:37.876Z] Total : 23668.63 92.46 0.00 0.00 2696.92 255.07 6196.13 00:24:33.042 ************************************ 00:24:33.042 END TEST xnvme_bdevperf 00:24:33.042 ************************************ 00:24:33.042 00:24:33.042 real 0m13.482s 00:24:33.042 user 0m4.979s 00:24:33.042 sys 0m6.027s 00:24:33.042 11:38:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.042 11:38:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.042 11:38:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:24:33.042 11:38:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:33.042 11:38:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.042 11:38:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:33.042 ************************************ 00:24:33.042 START TEST xnvme_fio_plugin 00:24:33.042 ************************************ 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:33.042 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:33.043 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:33.043 11:38:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:33.043 { 00:24:33.043 "subsystems": [ 00:24:33.043 { 00:24:33.043 "subsystem": "bdev", 00:24:33.043 "config": [ 00:24:33.043 { 00:24:33.043 "params": { 00:24:33.043 "io_mechanism": "libaio", 00:24:33.043 "conserve_cpu": true, 00:24:33.043 "filename": "/dev/nvme0n1", 00:24:33.043 "name": "xnvme_bdev" 00:24:33.043 }, 00:24:33.043 "method": "bdev_xnvme_create" 00:24:33.043 }, 00:24:33.043 { 00:24:33.043 "method": "bdev_wait_for_examine" 00:24:33.043 } 00:24:33.043 ] 00:24:33.043 } 00:24:33.043 ] 00:24:33.043 } 00:24:33.300 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:33.300 fio-3.35 00:24:33.300 Starting 1 thread 00:24:39.864 00:24:39.864 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71356: Wed Nov 20 11:38:44 2024 00:24:39.864 read: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(477MiB/5001msec) 00:24:39.864 slat (usec): min=4, max=1699, avg=36.64, stdev=27.29 00:24:39.864 clat (usec): min=114, max=5367, avg=1429.07, stdev=762.20 00:24:39.864 lat (usec): min=178, max=5433, avg=1465.71, stdev=763.89 00:24:39.864 clat percentiles (usec): 00:24:39.864 | 1.00th=[ 241], 5.00th=[ 347], 10.00th=[ 465], 20.00th=[ 693], 00:24:39.864 | 30.00th=[ 914], 40.00th=[ 1139], 50.00th=[ 1369], 60.00th=[ 1598], 00:24:39.864 | 70.00th=[ 1844], 80.00th=[ 2114], 90.00th=[ 2474], 95.00th=[ 2704], 00:24:39.864 | 99.00th=[ 3326], 99.50th=[ 3687], 99.90th=[ 4424], 99.95th=[ 4621], 00:24:39.864 | 99.99th=[ 4948] 00:24:39.864 bw ( KiB/s): min=88864, max=105464, per=99.60%, avg=97287.11, 
stdev=5757.23, samples=9 00:24:39.864 iops : min=22216, max=26366, avg=24321.78, stdev=1439.31, samples=9 00:24:39.864 lat (usec) : 250=1.29%, 500=10.34%, 750=11.00%, 1000=11.26% 00:24:39.864 lat (msec) : 2=41.89%, 4=23.96%, 10=0.27% 00:24:39.864 cpu : usr=23.34%, sys=54.62%, ctx=65, majf=0, minf=764 00:24:39.864 IO depths : 1=0.1%, 2=1.7%, 4=5.7%, 8=12.6%, 16=25.9%, 32=52.4%, >=64=1.6% 00:24:39.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.864 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:24:39.864 issued rwts: total=122118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:39.864 00:24:39.864 Run status group 0 (all jobs): 00:24:39.864 READ: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=477MiB (500MB), run=5001-5001msec 00:24:40.483 ----------------------------------------------------- 00:24:40.483 Suppressions used: 00:24:40.483 count bytes template 00:24:40.483 1 11 /usr/src/fio/parse.c 00:24:40.483 1 8 libtcmalloc_minimal.so 00:24:40.483 1 904 libcrypto.so 00:24:40.483 ----------------------------------------------------- 00:24:40.483 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:40.483 11:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:40.483 { 00:24:40.484 "subsystems": [ 00:24:40.484 { 00:24:40.484 "subsystem": "bdev", 00:24:40.484 "config": [ 00:24:40.484 { 00:24:40.484 "params": { 00:24:40.484 "io_mechanism": "libaio", 00:24:40.484 "conserve_cpu": true, 00:24:40.484 "filename": "/dev/nvme0n1", 00:24:40.484 "name": "xnvme_bdev" 00:24:40.484 }, 00:24:40.484 "method": "bdev_xnvme_create" 00:24:40.484 }, 00:24:40.484 { 00:24:40.484 "method": "bdev_wait_for_examine" 00:24:40.484 } 00:24:40.484 ] 00:24:40.484 } 00:24:40.484 ] 00:24:40.484 } 00:24:40.742 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:40.742 fio-3.35 00:24:40.742 Starting 1 thread 00:24:47.308 00:24:47.308 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71449: Wed Nov 20 11:38:52 2024 00:24:47.308 write: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(466MiB/5001msec); 0 zone resets 00:24:47.308 slat (usec): min=4, max=1601, avg=37.43, stdev=29.57 00:24:47.308 clat (usec): min=61, max=8396, avg=1480.33, stdev=803.49 00:24:47.308 lat (usec): min=134, max=8420, avg=1517.76, stdev=805.69 00:24:47.308 clat percentiles (usec): 00:24:47.308 | 1.00th=[ 258], 5.00th=[ 371], 10.00th=[ 494], 20.00th=[ 725], 00:24:47.308 | 30.00th=[ 947], 40.00th=[ 1156], 50.00th=[ 1385], 60.00th=[ 1631], 00:24:47.308 | 70.00th=[ 1909], 80.00th=[ 2180], 90.00th=[ 2540], 95.00th=[ 2802], 00:24:47.308 | 99.00th=[ 3687], 99.50th=[ 4080], 99.90th=[ 4817], 99.95th=[ 5735], 00:24:47.308 | 99.99th=[ 8160] 00:24:47.308 bw ( KiB/s): min=89080, max=111456, per=99.39%, avg=94742.67, stdev=6663.73, samples=9 00:24:47.308 iops : min=22270, max=27864, avg=23685.67, stdev=1665.93, samples=9 00:24:47.308 lat (usec) : 100=0.01%, 250=0.83%, 500=9.46%, 750=10.90%, 1000=11.29% 00:24:47.308 lat (msec) : 2=41.03%, 4=25.90%, 10=0.57% 00:24:47.308 cpu : usr=24.86%, sys=53.48%, ctx=128, majf=0, minf=764 00:24:47.308 IO depths : 1=0.1%, 2=1.6%, 4=5.6%, 8=12.4%, 16=25.8%, 32=52.8%, >=64=1.7% 00:24:47.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.308 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:24:47.308 issued rwts: total=0,119178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.308 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:47.308 00:24:47.308 Run status group 0 (all jobs): 00:24:47.308 WRITE: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=466MiB (488MB), run=5001-5001msec 00:24:47.876 ----------------------------------------------------- 00:24:47.876 Suppressions used: 00:24:47.876 count bytes template 00:24:47.876 1 11 /usr/src/fio/parse.c 00:24:47.876 1 8 libtcmalloc_minimal.so 00:24:47.876 1 904 libcrypto.so 00:24:47.876 ----------------------------------------------------- 00:24:47.876 00:24:47.876 00:24:47.876 real 0m14.916s 00:24:47.876 user 0m6.158s 00:24:47.876 sys 0m6.222s 00:24:47.876 
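Both fio passes in this test resolve the ASan runtime with the ldd | grep | awk chain traced above, then place it ahead of SPDK's fio plugin in LD_PRELOAD so the sanitizer initializes before the plugin loads. Reduced to its essentials (library and binary paths copied from this run; /tmp/xnvme.json stands in for the /dev/fd/62 pipe the harness actually uses):

    # Run fio against the xnvme bdev through the spdk_bdev ioengine;
    # --filename names the bdev from the JSON config, not a block device.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev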
************************************ 00:24:47.876 END TEST xnvme_fio_plugin 00:24:47.876 ************************************ 00:24:47.876 11:38:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.876 11:38:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:24:47.876 11:38:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:24:47.876 11:38:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:47.876 11:38:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.876 11:38:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:47.876 ************************************ 00:24:47.876 START TEST xnvme_rpc 00:24:47.876 ************************************ 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71541 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71541 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71541 ']' 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.876 11:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:48.135 [2024-11-20 11:38:53.674285] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
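This round switches the io_mechanism to io_uring with conserve_cpu left off, and exercises it through a freshly started spdk_tgt rather than bdevperf. Once the target is listening on /var/tmp/spdk.sock, the RPC round-trip traced below condenses to the following (rpc_cmd is the suite's JSON-RPC helper; the trailing '' leaves conserve_cpu at its false default):

    # Create the bdev, read its parameters back out of the framework
    # config, and tear it down again.
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> io_uring
    rpc_cmd bdev_xnvme_delete xnvme_bdev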
00:24:48.135 [2024-11-20 11:38:53.674529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71541 ] 00:24:48.135 [2024-11-20 11:38:53.874557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.394 [2024-11-20 11:38:54.025441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.331 xnvme_bdev 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.331 11:38:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:24:49.331 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71541 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71541 ']' 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71541 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71541 00:24:49.590 killing process with pid 71541 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71541' 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71541 00:24:49.590 11:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71541 00:24:52.122 ************************************ 00:24:52.122 END TEST xnvme_rpc 00:24:52.122 ************************************ 00:24:52.122 00:24:52.122 real 0m3.816s 00:24:52.122 user 0m3.993s 00:24:52.122 sys 0m0.572s 00:24:52.122 11:38:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.122 11:38:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:52.122 11:38:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:52.122 11:38:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:52.122 11:38:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.122 11:38:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:52.122 ************************************ 00:24:52.122 START TEST xnvme_bdevperf 00:24:52.122 ************************************ 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:52.122 11:38:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.122 { 00:24:52.122 "subsystems": [ 00:24:52.122 { 00:24:52.122 "subsystem": "bdev", 00:24:52.122 "config": [ 00:24:52.122 { 00:24:52.122 "params": { 00:24:52.122 "io_mechanism": "io_uring", 00:24:52.122 "conserve_cpu": false, 00:24:52.122 "filename": "/dev/nvme0n1", 00:24:52.122 "name": "xnvme_bdev" 00:24:52.122 }, 00:24:52.122 "method": "bdev_xnvme_create" 00:24:52.122 }, 00:24:52.122 { 00:24:52.122 "method": "bdev_wait_for_examine" 00:24:52.122 } 00:24:52.122 ] 00:24:52.122 } 00:24:52.122 ] 00:24:52.122 } 00:24:52.122 [2024-11-20 11:38:57.507443] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:52.122 [2024-11-20 11:38:57.507617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:24:52.122 [2024-11-20 11:38:57.683814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.122 [2024-11-20 11:38:57.814875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.691 Running I/O for 5 seconds... 00:24:54.563 51207.00 IOPS, 200.03 MiB/s [2024-11-20T11:39:01.264Z] 49920.00 IOPS, 195.00 MiB/s [2024-11-20T11:39:02.201Z] 48653.00 IOPS, 190.05 MiB/s [2024-11-20T11:39:03.577Z] 47931.25 IOPS, 187.23 MiB/s [2024-11-20T11:39:03.577Z] 47928.20 IOPS, 187.22 MiB/s 00:24:57.811 Latency(us) 00:24:57.811 [2024-11-20T11:39:03.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.811 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:57.811 xnvme_bdev : 5.01 47890.20 187.07 0.00 0.00 1332.20 439.39 9055.88 00:24:57.811 [2024-11-20T11:39:03.577Z] =================================================================================================================== 00:24:57.811 [2024-11-20T11:39:03.577Z] Total : 47890.20 187.07 0.00 0.00 1332.20 439.39 9055.88 00:24:58.747 11:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:58.747 11:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:24:58.747 11:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:24:58.747 11:39:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:58.747 11:39:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:58.747 { 00:24:58.747 "subsystems": [ 00:24:58.747 { 00:24:58.747 "subsystem": "bdev", 00:24:58.747 "config": [ 00:24:58.747 { 00:24:58.747 "params": { 00:24:58.747 "io_mechanism": "io_uring", 00:24:58.747 "conserve_cpu": false, 00:24:58.747 "filename": "/dev/nvme0n1", 00:24:58.747 "name": "xnvme_bdev" 00:24:58.747 }, 00:24:58.747 "method": "bdev_xnvme_create" 00:24:58.747 }, 00:24:58.747 { 00:24:58.747 "method": "bdev_wait_for_examine" 00:24:58.747 } 00:24:58.747 ] 00:24:58.747 } 00:24:58.747 ] 00:24:58.747 } 00:24:58.747 [2024-11-20 11:39:04.353454] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
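As in the libaio round, xnvme_bdevperf expands the io_pattern_ref nameref and runs one bdevperf pass per workload; the io_uring randwrite pass begins below. The driver loop amounts to something like the following, where <(gen_conf) is the process substitution assumed to be behind the /dev/fd/62 path in the traces (the flags themselves are verbatim from this log):

    # One five-second bdevperf pass per I/O pattern against the same bdev.
    for io_pattern in randread randwrite; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done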
00:24:58.748 [2024-11-20 11:39:04.353939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71697 ] 00:24:59.007 [2024-11-20 11:39:04.539106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.007 [2024-11-20 11:39:04.667713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.266 Running I/O for 5 seconds... 00:25:01.247 36606.00 IOPS, 142.99 MiB/s [2024-11-20T11:39:08.388Z] 36198.50 IOPS, 141.40 MiB/s [2024-11-20T11:39:09.322Z] 37542.67 IOPS, 146.65 MiB/s [2024-11-20T11:39:10.258Z] 37871.50 IOPS, 147.94 MiB/s [2024-11-20T11:39:10.258Z] 37139.80 IOPS, 145.08 MiB/s 00:25:04.492 Latency(us) 00:25:04.492 [2024-11-20T11:39:10.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.492 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:25:04.492 xnvme_bdev : 5.00 37108.24 144.95 0.00 0.00 1718.46 107.52 12630.57 00:25:04.492 [2024-11-20T11:39:10.258Z] =================================================================================================================== 00:25:04.492 [2024-11-20T11:39:10.258Z] Total : 37108.24 144.95 0.00 0.00 1718.46 107.52 12630.57 00:25:05.428 00:25:05.428 real 0m13.782s 00:25:05.428 user 0m6.899s 00:25:05.428 sys 0m6.665s 00:25:05.687 11:39:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.687 11:39:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:05.687 ************************************ 00:25:05.687 END TEST xnvme_bdevperf 00:25:05.687 ************************************ 00:25:05.688 11:39:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:25:05.688 11:39:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:05.688 11:39:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.688 11:39:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:05.688 ************************************ 00:25:05.688 START TEST xnvme_fio_plugin 00:25:05.688 ************************************ 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:05.688 
11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:05.688 11:39:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:05.688 { 00:25:05.688 "subsystems": [ 00:25:05.688 { 00:25:05.688 "subsystem": "bdev", 00:25:05.688 "config": [ 00:25:05.688 { 00:25:05.688 "params": { 00:25:05.688 "io_mechanism": "io_uring", 00:25:05.688 "conserve_cpu": false, 00:25:05.688 "filename": "/dev/nvme0n1", 00:25:05.688 "name": "xnvme_bdev" 00:25:05.688 }, 00:25:05.688 "method": "bdev_xnvme_create" 00:25:05.688 }, 00:25:05.688 { 00:25:05.688 "method": "bdev_wait_for_examine" 00:25:05.688 } 00:25:05.688 ] 00:25:05.688 } 00:25:05.688 ] 00:25:05.688 } 00:25:05.948 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:25:05.948 fio-3.35 00:25:05.948 Starting 1 thread 00:25:12.511 00:25:12.511 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71822: Wed Nov 20 11:39:17 2024 00:25:12.511 read: IOPS=48.0k, BW=188MiB/s (197MB/s)(938MiB/5001msec) 00:25:12.511 slat (nsec): min=2457, max=73048, avg=4137.61, stdev=2153.97 00:25:12.511 clat (usec): min=298, max=7823, avg=1167.26, stdev=188.72 00:25:12.511 lat (usec): min=302, max=7827, avg=1171.40, stdev=189.31 00:25:12.511 clat percentiles (usec): 00:25:12.511 | 1.00th=[ 873], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1037], 00:25:12.511 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1188], 00:25:12.511 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1352], 95.00th=[ 1434], 00:25:12.511 | 99.00th=[ 1795], 99.50th=[ 1942], 99.90th=[ 2900], 99.95th=[ 3458], 00:25:12.511 | 99.99th=[ 3982] 00:25:12.511 bw ( KiB/s): 
min=178688, max=215040, per=100.00%, avg=192301.33, stdev=10713.11, samples=9 00:25:12.511 iops : min=44672, max=53760, avg=48075.33, stdev=2678.28, samples=9 00:25:12.511 lat (usec) : 500=0.02%, 750=0.19%, 1000=12.36% 00:25:12.511 lat (msec) : 2=87.03%, 4=0.39%, 10=0.01% 00:25:12.511 cpu : usr=38.46%, sys=60.48%, ctx=9, majf=0, minf=762 00:25:12.511 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:25:12.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.511 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:25:12.511 issued rwts: total=240117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.511 00:25:12.511 Run status group 0 (all jobs): 00:25:12.511 READ: bw=188MiB/s (197MB/s), 188MiB/s-188MiB/s (197MB/s-197MB/s), io=938MiB (984MB), run=5001-5001msec 00:25:13.079 ----------------------------------------------------- 00:25:13.079 Suppressions used: 00:25:13.079 count bytes template 00:25:13.079 1 11 /usr/src/fio/parse.c 00:25:13.079 1 8 libtcmalloc_minimal.so 00:25:13.079 1 904 libcrypto.so 00:25:13.079 ----------------------------------------------------- 00:25:13.079 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:13.079 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:25:13.339 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:13.339 11:39:18 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:13.339 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:25:13.339 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:13.339 11:39:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:13.339 { 00:25:13.339 "subsystems": [ 00:25:13.339 { 00:25:13.339 "subsystem": "bdev", 00:25:13.339 "config": [ 00:25:13.339 { 00:25:13.339 "params": { 00:25:13.339 "io_mechanism": "io_uring", 00:25:13.339 "conserve_cpu": false, 00:25:13.339 "filename": "/dev/nvme0n1", 00:25:13.339 "name": "xnvme_bdev" 00:25:13.339 }, 00:25:13.339 "method": "bdev_xnvme_create" 00:25:13.339 }, 00:25:13.339 { 00:25:13.339 "method": "bdev_wait_for_examine" 00:25:13.339 } 00:25:13.339 ] 00:25:13.339 } 00:25:13.339 ] 00:25:13.339 } 00:25:13.596 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:25:13.596 fio-3.35 00:25:13.596 Starting 1 thread 00:25:20.162 00:25:20.162 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71914: Wed Nov 20 11:39:24 2024 00:25:20.162 write: IOPS=41.5k, BW=162MiB/s (170MB/s)(811MiB/5001msec); 0 zone resets 00:25:20.162 slat (usec): min=2, max=351, avg= 4.68, stdev= 2.96 00:25:20.162 clat (usec): min=86, max=18679, avg=1369.02, stdev=823.59 00:25:20.162 lat (usec): min=91, max=18683, avg=1373.70, stdev=823.87 00:25:20.162 clat percentiles (usec): 00:25:20.162 | 1.00th=[ 383], 5.00th=[ 889], 10.00th=[ 996], 20.00th=[ 1074], 00:25:20.162 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1270], 00:25:20.162 | 70.00th=[ 1336], 80.00th=[ 1418], 90.00th=[ 1663], 95.00th=[ 2147], 00:25:20.162 | 99.00th=[ 5342], 99.50th=[ 6783], 99.90th=[ 9896], 99.95th=[12780], 00:25:20.162 | 99.99th=[17433] 00:25:20.162 bw ( KiB/s): min=134664, max=195584, per=100.00%, avg=169577.78, stdev=21221.36, samples=9 00:25:20.162 iops : min=33666, max=48896, avg=42394.44, stdev=5305.34, samples=9 00:25:20.162 lat (usec) : 100=0.01%, 250=0.31%, 500=1.44%, 750=1.50%, 1000=7.27% 00:25:20.162 lat (msec) : 2=84.00%, 4=3.61%, 10=1.78%, 20=0.09% 00:25:20.162 cpu : usr=36.46%, sys=62.40%, ctx=18, majf=0, minf=762 00:25:20.162 IO depths : 1=1.3%, 2=2.7%, 4=5.4%, 8=11.0%, 16=23.2%, 32=54.1%, >=64=2.3% 00:25:20.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.162 complete : 0=0.0%, 4=98.1%, 8=0.2%, 16=0.2%, 32=0.1%, 64=1.5%, >=64=0.0% 00:25:20.162 issued rwts: total=0,207586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.162 00:25:20.162 Run status group 0 (all jobs): 00:25:20.162 WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=811MiB (850MB), run=5001-5001msec 00:25:20.730 ----------------------------------------------------- 00:25:20.730 Suppressions used: 00:25:20.730 count bytes template 00:25:20.730 1 11 /usr/src/fio/parse.c 00:25:20.730 1 8 libtcmalloc_minimal.so 00:25:20.730 1 904 libcrypto.so 00:25:20.730 ----------------------------------------------------- 00:25:20.730 00:25:20.730 00:25:20.730 real 0m15.113s 00:25:20.730 user 0m7.729s 00:25:20.730 sys 0m6.977s 
00:25:20.730 11:39:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.730 11:39:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 ************************************ 00:25:20.730 END TEST xnvme_fio_plugin 00:25:20.730 ************************************ 00:25:20.730 11:39:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:25:20.730 11:39:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:25:20.730 11:39:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:25:20.730 11:39:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:25:20.730 11:39:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:20.730 11:39:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.730 11:39:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:20.730 ************************************ 00:25:20.730 START TEST xnvme_rpc 00:25:20.730 ************************************ 00:25:20.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72006 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72006 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72006 ']' 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.730 11:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:20.989 [2024-11-20 11:39:26.551838] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
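The final RPC pass repeats the io_uring flow with CPU conservation turned on: the -c argument visible in the create call below maps to "conserve_cpu": true in the stored config, which the test reads back before deleting the bdev. Condensed with the same helpers as before:

    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c => "conserve_cpu": true
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true
    rpc_cmd bdev_xnvme_delete xnvme_bdev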
00:25:20.989 [2024-11-20 11:39:26.552041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72006 ] 00:25:20.989 [2024-11-20 11:39:26.740358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.248 [2024-11-20 11:39:26.854049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.184 xnvme_bdev 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:25:22.184 11:39:27 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.184 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72006 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72006 ']' 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72006 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.442 11:39:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72006 00:25:22.442 killing process with pid 72006 00:25:22.442 11:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.442 11:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.442 11:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72006' 00:25:22.442 11:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72006 00:25:22.442 11:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72006 00:25:25.007 00:25:25.007 real 0m3.945s 00:25:25.008 user 0m4.059s 00:25:25.008 sys 0m0.613s 00:25:25.008 11:39:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.008 11:39:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.008 ************************************ 00:25:25.008 END TEST xnvme_rpc 00:25:25.008 ************************************ 00:25:25.008 11:39:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:25:25.008 11:39:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:25.008 11:39:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.008 11:39:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:25.008 ************************************ 00:25:25.008 START TEST xnvme_bdevperf 00:25:25.008 ************************************ 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:25:25.008 11:39:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:25.008 { 00:25:25.008 "subsystems": [ 00:25:25.008 { 00:25:25.008 "subsystem": "bdev", 00:25:25.008 "config": [ 00:25:25.008 { 00:25:25.008 "params": { 00:25:25.008 "io_mechanism": "io_uring", 00:25:25.008 "conserve_cpu": true, 00:25:25.008 "filename": "/dev/nvme0n1", 00:25:25.008 "name": "xnvme_bdev" 00:25:25.008 }, 00:25:25.008 "method": "bdev_xnvme_create" 00:25:25.008 }, 00:25:25.008 { 00:25:25.008 "method": "bdev_wait_for_examine" 00:25:25.008 } 00:25:25.008 ] 00:25:25.008 } 00:25:25.008 ] 00:25:25.008 } 00:25:25.008 [2024-11-20 11:39:30.521345] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:25.008 [2024-11-20 11:39:30.521545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72087 ] 00:25:25.008 [2024-11-20 11:39:30.709589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.267 [2024-11-20 11:39:30.850530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.526 Running I/O for 5 seconds... 00:25:27.840 47936.00 IOPS, 187.25 MiB/s [2024-11-20T11:39:34.598Z] 48433.00 IOPS, 189.19 MiB/s [2024-11-20T11:39:35.548Z] 47932.67 IOPS, 187.24 MiB/s [2024-11-20T11:39:36.485Z] 49130.50 IOPS, 191.92 MiB/s 00:25:30.719 Latency(us) 00:25:30.719 [2024-11-20T11:39:36.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.719 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:25:30.719 xnvme_bdev : 5.00 49437.06 193.11 0.00 0.00 1290.76 461.73 9532.51 00:25:30.719 [2024-11-20T11:39:36.485Z] =================================================================================================================== 00:25:30.719 [2024-11-20T11:39:36.485Z] Total : 49437.06 193.11 0.00 0.00 1290.76 461.73 9532.51 00:25:31.656 11:39:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:31.656 11:39:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:25:31.656 11:39:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:25:31.656 11:39:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:25:31.656 11:39:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.914 { 00:25:31.914 "subsystems": [ 00:25:31.914 { 00:25:31.914 "subsystem": "bdev", 00:25:31.914 "config": [ 00:25:31.914 { 00:25:31.914 "params": { 00:25:31.914 "io_mechanism": "io_uring", 00:25:31.914 "conserve_cpu": true, 00:25:31.914 "filename": "/dev/nvme0n1", 00:25:31.914 "name": "xnvme_bdev" 00:25:31.914 }, 00:25:31.914 "method": "bdev_xnvme_create" 00:25:31.914 }, 00:25:31.914 { 00:25:31.914 "method": "bdev_wait_for_examine" 00:25:31.914 } 00:25:31.914 ] 00:25:31.914 } 00:25:31.914 ] 00:25:31.914 } 00:25:31.914 [2024-11-20 11:39:37.490378] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:25:31.914 [2024-11-20 11:39:37.490594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72162 ] 00:25:31.914 [2024-11-20 11:39:37.676722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.172 [2024-11-20 11:39:37.830176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.740 Running I/O for 5 seconds... 00:25:34.612 42240.00 IOPS, 165.00 MiB/s [2024-11-20T11:39:41.385Z] 42272.00 IOPS, 165.12 MiB/s [2024-11-20T11:39:42.321Z] 41856.00 IOPS, 163.50 MiB/s [2024-11-20T11:39:43.257Z] 42160.00 IOPS, 164.69 MiB/s 00:25:37.491 Latency(us) 00:25:37.491 [2024-11-20T11:39:43.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.491 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:25:37.491 xnvme_bdev : 5.00 42385.41 165.57 0.00 0.00 1504.69 785.69 6881.28 00:25:37.491 [2024-11-20T11:39:43.257Z] =================================================================================================================== 00:25:37.491 [2024-11-20T11:39:43.257Z] Total : 42385.41 165.57 0.00 0.00 1504.69 785.69 6881.28 00:25:38.874 00:25:38.874 real 0m13.797s 00:25:38.874 user 0m8.630s 00:25:38.874 sys 0m4.595s 00:25:38.874 11:39:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.874 11:39:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.874 ************************************ 00:25:38.874 END TEST xnvme_bdevperf 00:25:38.874 ************************************ 00:25:38.874 11:39:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:25:38.874 11:39:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:38.874 11:39:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.874 11:39:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:38.874 ************************************ 00:25:38.874 START TEST xnvme_fio_plugin 00:25:38.874 ************************************ 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.874 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:38.875 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:38.875 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:25:38.875 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:38.875 11:39:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:38.875 { 00:25:38.875 "subsystems": [ 00:25:38.875 { 00:25:38.875 "subsystem": "bdev", 00:25:38.875 "config": [ 00:25:38.875 { 00:25:38.875 "params": { 00:25:38.875 "io_mechanism": "io_uring", 00:25:38.875 "conserve_cpu": true, 00:25:38.875 "filename": "/dev/nvme0n1", 00:25:38.875 "name": "xnvme_bdev" 00:25:38.875 }, 00:25:38.875 "method": "bdev_xnvme_create" 00:25:38.875 }, 00:25:38.875 { 00:25:38.875 "method": "bdev_wait_for_examine" 00:25:38.875 } 00:25:38.875 ] 00:25:38.875 } 00:25:38.875 ] 00:25:38.875 } 00:25:38.875 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:25:38.875 fio-3.35 00:25:38.875 Starting 1 thread 00:25:45.476 00:25:45.476 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72287: Wed Nov 20 11:39:50 2024 00:25:45.476 read: IOPS=49.5k, BW=193MiB/s (203MB/s)(966MiB/5001msec) 00:25:45.476 slat (usec): min=2, max=103, avg= 3.81, stdev= 2.26 00:25:45.476 clat (usec): min=281, max=2635, avg=1140.63, stdev=135.83 00:25:45.476 lat (usec): min=284, max=2699, avg=1144.44, stdev=136.32 00:25:45.476 clat percentiles (usec): 00:25:45.476 | 1.00th=[ 898], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1029], 00:25:45.476 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:25:45.476 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1369], 00:25:45.476 | 99.00th=[ 1582], 99.50th=[ 1680], 99.90th=[ 1827], 99.95th=[ 1926], 00:25:45.476 | 99.99th=[ 2442] 00:25:45.476 bw ( KiB/s): min=186880, max=207872, per=99.60%, avg=197091.56, 
stdev=7236.75, samples=9 00:25:45.476 iops : min=46720, max=51968, avg=49272.89, stdev=1809.19, samples=9 00:25:45.476 lat (usec) : 500=0.01%, 750=0.04%, 1000=12.75% 00:25:45.476 lat (msec) : 2=87.18%, 4=0.03% 00:25:45.476 cpu : usr=49.02%, sys=46.52%, ctx=16, majf=0, minf=762 00:25:45.476 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:25:45.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.476 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:25:45.476 issued rwts: total=247407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:45.476 00:25:45.476 Run status group 0 (all jobs): 00:25:45.476 READ: bw=193MiB/s (203MB/s), 193MiB/s-193MiB/s (203MB/s-203MB/s), io=966MiB (1013MB), run=5001-5001msec 00:25:45.735 ----------------------------------------------------- 00:25:45.735 Suppressions used: 00:25:45.735 count bytes template 00:25:45.735 1 11 /usr/src/fio/parse.c 00:25:45.735 1 8 libtcmalloc_minimal.so 00:25:45.735 1 904 libcrypto.so 00:25:45.735 ----------------------------------------------------- 00:25:45.735 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:45.993 11:39:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:25:45.993 { 00:25:45.993 "subsystems": [ 00:25:45.993 { 00:25:45.993 "subsystem": "bdev", 00:25:45.993 "config": [ 00:25:45.993 { 00:25:45.993 "params": { 00:25:45.993 "io_mechanism": "io_uring", 00:25:45.993 "conserve_cpu": true, 00:25:45.993 "filename": "/dev/nvme0n1", 00:25:45.993 "name": "xnvme_bdev" 00:25:45.993 }, 00:25:45.993 "method": "bdev_xnvme_create" 00:25:45.993 }, 00:25:45.993 { 00:25:45.993 "method": "bdev_wait_for_examine" 00:25:45.993 } 00:25:45.993 ] 00:25:45.993 } 00:25:45.993 ] 00:25:45.993 } 00:25:46.251 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:25:46.251 fio-3.35 00:25:46.251 Starting 1 thread 00:25:52.814 00:25:52.814 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72379: Wed Nov 20 11:39:57 2024 00:25:52.814 write: IOPS=46.0k, BW=180MiB/s (188MB/s)(898MiB/5001msec); 0 zone resets 00:25:52.814 slat (nsec): min=2521, max=63817, avg=4541.49, stdev=2359.58 00:25:52.814 clat (usec): min=834, max=2814, avg=1208.36, stdev=162.25 00:25:52.814 lat (usec): min=837, max=2848, avg=1212.90, stdev=163.01 00:25:52.814 clat percentiles (usec): 00:25:52.814 | 1.00th=[ 938], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1074], 00:25:52.814 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1221], 00:25:52.814 | 70.00th=[ 1270], 80.00th=[ 1319], 90.00th=[ 1401], 95.00th=[ 1516], 00:25:52.814 | 99.00th=[ 1762], 99.50th=[ 1827], 99.90th=[ 1958], 99.95th=[ 2089], 00:25:52.814 | 99.99th=[ 2606] 00:25:52.814 bw ( KiB/s): min=181248, max=192512, per=100.00%, avg=186083.56, stdev=3754.67, samples=9 00:25:52.814 iops : min=45312, max=48128, avg=46520.89, stdev=938.67, samples=9 00:25:52.814 lat (usec) : 1000=5.44% 00:25:52.814 lat (msec) : 2=94.49%, 4=0.07% 00:25:52.814 cpu : usr=52.66%, sys=43.18%, ctx=8, majf=0, minf=762 00:25:52.814 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:25:52.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.814 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:25:52.814 issued rwts: total=0,229952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.814 00:25:52.814 Run status group 0 (all jobs): 00:25:52.814 WRITE: bw=180MiB/s (188MB/s), 180MiB/s-180MiB/s (188MB/s-188MB/s), io=898MiB (942MB), run=5001-5001msec 00:25:53.380 ----------------------------------------------------- 00:25:53.380 Suppressions used: 00:25:53.380 count bytes template 00:25:53.380 1 11 /usr/src/fio/parse.c 00:25:53.380 1 8 libtcmalloc_minimal.so 00:25:53.380 1 904 libcrypto.so 00:25:53.380 ----------------------------------------------------- 00:25:53.380 00:25:53.380 00:25:53.380 real 0m14.623s 00:25:53.380 user 0m8.651s 00:25:53.380 sys 0m5.248s 00:25:53.380 11:39:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.380 
************************************ 00:25:53.380 END TEST xnvme_fio_plugin 00:25:53.380 ************************************ 00:25:53.380 11:39:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:25:53.380 11:39:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:25:53.380 11:39:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.380 11:39:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.380 11:39:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:53.380 ************************************ 00:25:53.380 START TEST xnvme_rpc 00:25:53.380 ************************************ 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72465 00:25:53.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72465 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72465 ']' 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.380 11:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:53.380 [2024-11-20 11:39:59.078696] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
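[Note: the rpc_cmd invocations that follow run through the autotest wrapper; against the spdk_tgt started above, the equivalent hand-run flow is roughly as follows. A sketch, assuming SPDK's stock scripts/rpc.py client, which the rpc_cmd wrapper delegates to:

    # create an xnvme bdev over the NVMe generic char device via io_uring_cmd
    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    # read back what was registered (same jq filter the rpc_xnvme helper uses below)
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'    # /dev/ng0n1
    # tear it down again
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
]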
00:25:53.380 [2024-11-20 11:39:59.079898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72465 ] 00:25:53.638 [2024-11-20 11:39:59.272395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.638 [2024-11-20 11:39:59.397282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:54.573 xnvme_bdev 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.573 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72465 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72465 ']' 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72465 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72465 00:25:54.832 killing process with pid 72465 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72465' 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72465 00:25:54.832 11:40:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72465 00:25:57.375 ************************************ 00:25:57.375 END TEST xnvme_rpc 00:25:57.375 ************************************ 00:25:57.375 00:25:57.375 real 0m3.627s 00:25:57.375 user 0m3.799s 00:25:57.375 sys 0m0.589s 00:25:57.375 11:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.375 11:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:57.375 11:40:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:25:57.375 11:40:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:57.375 11:40:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.375 11:40:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:57.375 ************************************ 00:25:57.375 START TEST xnvme_bdevperf 00:25:57.375 ************************************ 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:25:57.375 11:40:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.375 { 00:25:57.375 "subsystems": [ 00:25:57.375 { 00:25:57.375 "subsystem": "bdev", 00:25:57.375 "config": [ 00:25:57.375 { 00:25:57.375 "params": { 00:25:57.375 "io_mechanism": "io_uring_cmd", 00:25:57.375 "conserve_cpu": false, 00:25:57.375 "filename": "/dev/ng0n1", 00:25:57.375 "name": "xnvme_bdev" 00:25:57.375 }, 00:25:57.375 "method": "bdev_xnvme_create" 00:25:57.375 }, 00:25:57.375 { 00:25:57.375 "method": "bdev_wait_for_examine" 00:25:57.375 } 00:25:57.375 ] 00:25:57.375 } 00:25:57.375 ] 00:25:57.375 } 00:25:57.375 [2024-11-20 11:40:02.738037] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:57.375 [2024-11-20 11:40:02.738588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72545 ] 00:25:57.375 [2024-11-20 11:40:02.924521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.375 [2024-11-20 11:40:03.039921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.633 Running I/O for 5 seconds... 00:25:59.944 50624.00 IOPS, 197.75 MiB/s [2024-11-20T11:40:06.647Z] 50496.00 IOPS, 197.25 MiB/s [2024-11-20T11:40:07.583Z] 50602.67 IOPS, 197.67 MiB/s [2024-11-20T11:40:08.518Z] 51408.00 IOPS, 200.81 MiB/s [2024-11-20T11:40:08.518Z] 51507.20 IOPS, 201.20 MiB/s 00:26:02.752 Latency(us) 00:26:02.752 [2024-11-20T11:40:08.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.752 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:02.752 xnvme_bdev : 5.00 51475.38 201.08 0.00 0.00 1239.34 837.82 3336.38 00:26:02.752 [2024-11-20T11:40:08.518Z] =================================================================================================================== 00:26:02.752 [2024-11-20T11:40:08.518Z] Total : 51475.38 201.08 0.00 0.00 1239.34 837.82 3336.38 00:26:03.686 11:40:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:03.686 11:40:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:26:03.686 11:40:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:03.686 11:40:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:03.686 11:40:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.686 { 00:26:03.686 "subsystems": [ 00:26:03.686 { 00:26:03.686 "subsystem": "bdev", 00:26:03.686 "config": [ 00:26:03.686 { 00:26:03.686 "params": { 00:26:03.686 "io_mechanism": "io_uring_cmd", 00:26:03.686 "conserve_cpu": false, 00:26:03.686 "filename": "/dev/ng0n1", 00:26:03.686 "name": "xnvme_bdev" 00:26:03.686 }, 00:26:03.686 "method": "bdev_xnvme_create" 00:26:03.686 }, 00:26:03.686 { 00:26:03.686 "method": "bdev_wait_for_examine" 00:26:03.686 } 00:26:03.686 ] 00:26:03.686 } 00:26:03.686 ] 00:26:03.686 } 00:26:03.686 [2024-11-20 11:40:09.422814] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
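[Note: between the io_uring pass earlier and the io_uring_cmd pass starting here, only two params in the generated config change; conserve_cpu is swept as a separate dimension:

    "io_mechanism": "io_uring",      "filename": "/dev/nvme0n1"   (block device node)
    "io_mechanism": "io_uring_cmd",  "filename": "/dev/ng0n1"     (NVMe generic char device node)
]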
00:26:03.686 [2024-11-20 11:40:09.423228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72624 ] 00:26:03.945 [2024-11-20 11:40:09.606321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.203 [2024-11-20 11:40:09.728675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.462 Running I/O for 5 seconds... 00:26:06.335 43456.00 IOPS, 169.75 MiB/s [2024-11-20T11:40:13.478Z] 42705.00 IOPS, 166.82 MiB/s [2024-11-20T11:40:14.500Z] 41605.67 IOPS, 162.52 MiB/s [2024-11-20T11:40:15.068Z] 40651.50 IOPS, 158.79 MiB/s 00:26:09.302 Latency(us) 00:26:09.302 [2024-11-20T11:40:15.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.302 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:26:09.302 xnvme_bdev : 5.00 41135.81 160.69 0.00 0.00 1550.74 106.12 16205.27 00:26:09.302 [2024-11-20T11:40:15.068Z] =================================================================================================================== 00:26:09.302 [2024-11-20T11:40:15.068Z] Total : 41135.81 160.69 0.00 0.00 1550.74 106.12 16205.27 00:26:10.678 11:40:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:10.678 11:40:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:26:10.678 11:40:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:10.678 11:40:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:10.678 11:40:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:10.678 { 00:26:10.678 "subsystems": [ 00:26:10.678 { 00:26:10.678 "subsystem": "bdev", 00:26:10.678 "config": [ 00:26:10.678 { 00:26:10.678 "params": { 00:26:10.678 "io_mechanism": "io_uring_cmd", 00:26:10.678 "conserve_cpu": false, 00:26:10.678 "filename": "/dev/ng0n1", 00:26:10.678 "name": "xnvme_bdev" 00:26:10.678 }, 00:26:10.678 "method": "bdev_xnvme_create" 00:26:10.678 }, 00:26:10.678 { 00:26:10.678 "method": "bdev_wait_for_examine" 00:26:10.678 } 00:26:10.678 ] 00:26:10.678 } 00:26:10.678 ] 00:26:10.678 } 00:26:10.678 [2024-11-20 11:40:16.205261] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:26:10.678 [2024-11-20 11:40:16.206026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72694 ] 00:26:10.678 [2024-11-20 11:40:16.403017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.937 [2024-11-20 11:40:16.538333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.196 Running I/O for 5 seconds... 
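[Note: as a quick consistency check on the tables above: at a 4096-byte IO size, MiB/s = IOPS * 4096 / 2^20 = IOPS / 256, so the randwrite figure of 41135.81 IOPS works out to 41135.81 / 256 = 160.69 MiB/s, matching the reported column.]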
00:26:13.510 71616.00 IOPS, 279.75 MiB/s [2024-11-20T11:40:20.212Z] 71712.00 IOPS, 280.12 MiB/s [2024-11-20T11:40:21.149Z] 71317.33 IOPS, 278.58 MiB/s [2024-11-20T11:40:22.089Z] 72512.00 IOPS, 283.25 MiB/s 00:26:16.323 Latency(us) 00:26:16.323 [2024-11-20T11:40:22.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.323 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:26:16.323 xnvme_bdev : 5.00 73131.59 285.67 0.00 0.00 871.38 510.14 3172.54 00:26:16.323 [2024-11-20T11:40:22.089Z] =================================================================================================================== 00:26:16.323 [2024-11-20T11:40:22.089Z] Total : 73131.59 285.67 0.00 0.00 871.38 510.14 3172.54 00:26:17.329 11:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:17.329 11:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:26:17.329 11:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:17.329 11:40:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:17.329 11:40:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.329 { 00:26:17.329 "subsystems": [ 00:26:17.329 { 00:26:17.329 "subsystem": "bdev", 00:26:17.329 "config": [ 00:26:17.329 { 00:26:17.329 "params": { 00:26:17.329 "io_mechanism": "io_uring_cmd", 00:26:17.329 "conserve_cpu": false, 00:26:17.329 "filename": "/dev/ng0n1", 00:26:17.329 "name": "xnvme_bdev" 00:26:17.329 }, 00:26:17.329 "method": "bdev_xnvme_create" 00:26:17.329 }, 00:26:17.329 { 00:26:17.329 "method": "bdev_wait_for_examine" 00:26:17.329 } 00:26:17.329 ] 00:26:17.329 } 00:26:17.329 ] 00:26:17.329 } 00:26:17.329 [2024-11-20 11:40:23.010093] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:26:17.329 [2024-11-20 11:40:23.010281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72776 ] 00:26:17.588 [2024-11-20 11:40:23.194396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.588 [2024-11-20 11:40:23.305342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.155 Running I/O for 5 seconds... 
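[Note: the unmap workload above posts roughly 73k IOPS against 41-51k for randread/randwrite in the same pass; since an NVMe deallocate carries range descriptors rather than a data payload, the MiB/s column (still IOPS * 4 KiB) reflects command rate rather than bytes actually moved.]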
00:26:20.025 18968.00 IOPS, 74.09 MiB/s [2024-11-20T11:40:26.725Z] 32829.00 IOPS, 128.24 MiB/s [2024-11-20T11:40:27.658Z] 37073.33 IOPS, 144.82 MiB/s [2024-11-20T11:40:28.646Z] 39656.75 IOPS, 154.91 MiB/s [2024-11-20T11:40:28.646Z] 40786.00 IOPS, 159.32 MiB/s 00:26:22.880 Latency(us) 00:26:22.880 [2024-11-20T11:40:28.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.880 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:26:22.880 xnvme_bdev : 5.00 40762.69 159.23 0.00 0.00 1565.45 98.68 29669.93 00:26:22.880 [2024-11-20T11:40:28.646Z] =================================================================================================================== 00:26:22.880 [2024-11-20T11:40:28.646Z] Total : 40762.69 159.23 0.00 0.00 1565.45 98.68 29669.93 00:26:24.256 ************************************ 00:26:24.256 END TEST xnvme_bdevperf 00:26:24.256 ************************************ 00:26:24.256 00:26:24.256 real 0m27.000s 00:26:24.256 user 0m14.502s 00:26:24.256 sys 0m12.064s 00:26:24.256 11:40:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.256 11:40:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.256 11:40:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:26:24.256 11:40:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.256 11:40:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.256 11:40:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:24.256 ************************************ 00:26:24.256 START TEST xnvme_fio_plugin 00:26:24.256 ************************************ 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
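[Note: the fio_bdev trace that follows locates the ASan runtime the SPDK fio plugin links against, so fio starts with that runtime preloaded ahead of the plugin. Condensed, the logic visible in the xtrace is (the library path is whatever ldd resolves on the host):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # pick the resolved path of the ASan runtime out of the plugin's ldd output
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer runtime ahead of the plugin itself, then run the workload
    [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    # "$@" stands for the --ioengine=spdk_bdev ... flags shown in the trace
]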
00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:24.256 11:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:24.256 { 00:26:24.256 "subsystems": [ 00:26:24.256 { 00:26:24.256 "subsystem": "bdev", 00:26:24.256 "config": [ 00:26:24.256 { 00:26:24.256 "params": { 00:26:24.256 "io_mechanism": "io_uring_cmd", 00:26:24.256 "conserve_cpu": false, 00:26:24.256 "filename": "/dev/ng0n1", 00:26:24.256 "name": "xnvme_bdev" 00:26:24.256 }, 00:26:24.256 "method": "bdev_xnvme_create" 00:26:24.256 }, 00:26:24.256 { 00:26:24.256 "method": "bdev_wait_for_examine" 00:26:24.256 } 00:26:24.256 ] 00:26:24.256 } 00:26:24.256 ] 00:26:24.256 } 00:26:24.256 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:26:24.256 fio-3.35 00:26:24.256 Starting 1 thread 00:26:30.817 00:26:30.817 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72896: Wed Nov 20 11:40:35 2024 00:26:30.817 read: IOPS=48.8k, BW=191MiB/s (200MB/s)(953MiB/5001msec) 00:26:30.817 slat (usec): min=2, max=141, avg= 3.61, stdev= 2.61 00:26:30.817 clat (usec): min=613, max=4535, avg=1165.06, stdev=131.06 00:26:30.817 lat (usec): min=617, max=4537, avg=1168.66, stdev=131.46 00:26:30.817 clat percentiles (usec): 00:26:30.817 | 1.00th=[ 922], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1057], 00:26:30.817 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:26:30.817 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1336], 95.00th=[ 1385], 00:26:30.817 | 99.00th=[ 1565], 99.50th=[ 1631], 99.90th=[ 1795], 99.95th=[ 1876], 00:26:30.817 | 99.99th=[ 2507] 00:26:30.817 bw ( KiB/s): min=185344, max=207360, per=100.00%, avg=196663.11, stdev=6993.35, samples=9 00:26:30.817 iops : min=46336, max=51840, avg=49165.78, stdev=1748.34, samples=9 00:26:30.817 lat (usec) : 750=0.01%, 1000=7.60% 00:26:30.817 lat (msec) : 2=92.35%, 4=0.03%, 10=0.01% 00:26:30.817 cpu : usr=34.58%, sys=64.22%, ctx=16, majf=0, minf=762 00:26:30.817 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:26:30.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.817 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:26:30.817 issued rwts: total=244061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.817 00:26:30.817 Run status group 0 (all jobs): 00:26:30.818 READ: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=953MiB (1000MB), run=5001-5001msec 00:26:31.385 ----------------------------------------------------- 00:26:31.385 Suppressions used: 00:26:31.385 count bytes template 00:26:31.385 1 11 /usr/src/fio/parse.c 00:26:31.385 1 8 libtcmalloc_minimal.so 00:26:31.385 1 904 libcrypto.so 00:26:31.385 ----------------------------------------------------- 00:26:31.385 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:26:31.385 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:31.644 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:31.644 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:31.644 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:26:31.644 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:31.644 11:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:26:31.644 { 00:26:31.644 "subsystems": [ 00:26:31.644 { 00:26:31.644 "subsystem": "bdev", 00:26:31.644 "config": [ 00:26:31.644 { 00:26:31.644 "params": { 00:26:31.644 "io_mechanism": "io_uring_cmd", 00:26:31.644 "conserve_cpu": false, 00:26:31.644 "filename": "/dev/ng0n1", 00:26:31.644 "name": "xnvme_bdev" 00:26:31.644 }, 00:26:31.644 "method": "bdev_xnvme_create" 00:26:31.644 }, 00:26:31.644 { 00:26:31.644 "method": "bdev_wait_for_examine" 00:26:31.644 } 00:26:31.644 ] 00:26:31.644 } 00:26:31.644 ] 00:26:31.644 } 00:26:31.901 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:26:31.901 fio-3.35 00:26:31.901 Starting 1 thread 00:26:38.463 00:26:38.463 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72994: Wed Nov 20 11:40:43 2024 00:26:38.463 write: IOPS=43.6k, BW=170MiB/s (178MB/s)(851MiB/5001msec); 0 zone resets 00:26:38.463 slat (nsec): min=2424, max=71902, avg=4813.16, stdev=2458.01 00:26:38.463 clat (usec): min=773, max=3899, avg=1276.16, stdev=180.32 00:26:38.463 lat (usec): min=776, max=3906, avg=1280.97, stdev=181.02 00:26:38.463 clat percentiles (usec): 00:26:38.463 | 1.00th=[ 988], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1139], 00:26:38.463 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1287], 00:26:38.463 | 70.00th=[ 1336], 80.00th=[ 1385], 90.00th=[ 1483], 95.00th=[ 1614], 00:26:38.463 | 99.00th=[ 1844], 99.50th=[ 1942], 99.90th=[ 2245], 99.95th=[ 2671], 00:26:38.463 | 99.99th=[ 3818] 00:26:38.463 bw ( KiB/s): min=169984, max=179712, per=100.00%, avg=174364.44, stdev=3115.54, samples=9 00:26:38.463 iops : min=42496, max=44928, avg=43591.11, stdev=778.89, samples=9 00:26:38.463 lat (usec) : 1000=1.49% 00:26:38.463 lat (msec) : 2=98.17%, 4=0.34% 00:26:38.463 cpu : usr=41.64%, sys=57.26%, ctx=13, majf=0, minf=762 00:26:38.463 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:26:38.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.463 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:26:38.463 issued rwts: total=0,217920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.463 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:38.463 00:26:38.463 Run status group 0 (all jobs): 00:26:38.463 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=851MiB (893MB), run=5001-5001msec 00:26:39.030 ----------------------------------------------------- 00:26:39.030 Suppressions used: 00:26:39.030 count bytes template 00:26:39.030 1 11 /usr/src/fio/parse.c 00:26:39.030 1 8 libtcmalloc_minimal.so 00:26:39.030 1 904 libcrypto.so 00:26:39.030 ----------------------------------------------------- 00:26:39.030 00:26:39.030 00:26:39.030 real 0m14.833s 00:26:39.030 user 0m7.584s 00:26:39.030 sys 0m6.866s 00:26:39.030 11:40:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.030 ************************************ 00:26:39.030 END TEST xnvme_fio_plugin 00:26:39.030 ************************************ 00:26:39.030 11:40:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:26:39.030 11:40:44 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:26:39.030 11:40:44 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:26:39.030 11:40:44 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:26:39.030 
11:40:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:26:39.030 11:40:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:39.030 11:40:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.030 11:40:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:39.030 ************************************ 00:26:39.030 START TEST xnvme_rpc 00:26:39.030 ************************************ 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73075 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73075 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73075 ']' 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.030 11:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:39.030 [2024-11-20 11:40:44.703229] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
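[Note: this second xnvme_rpc pass exercises the -c variant (cc["true"]=-c above maps to conserve_cpu); the check it builds up to reduces to, as a sketch using the same rpc.py client and jq filter seen in the trace:

    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true
]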
00:26:39.031 [2024-11-20 11:40:44.703416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73075 ] 00:26:39.289 [2024-11-20 11:40:44.889444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.289 [2024-11-20 11:40:45.003447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:40.225 xnvme_bdev 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:26:40.225 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.484 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:26:40.484 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:26:40.484 11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:26:40.484 
11:40:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:26:40.484 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.484 11:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73075 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73075 ']' 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73075 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73075 00:26:40.484 killing process with pid 73075 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73075' 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73075 00:26:40.484 11:40:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73075 00:26:43.016 00:26:43.016 real 0m3.608s 00:26:43.016 user 0m3.754s 00:26:43.016 sys 0m0.565s 00:26:43.016 11:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.016 ************************************ 00:26:43.016 END TEST xnvme_rpc 00:26:43.016 ************************************ 00:26:43.016 11:40:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:43.016 11:40:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:26:43.016 11:40:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:43.016 11:40:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.016 11:40:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:43.016 ************************************ 00:26:43.016 START TEST xnvme_bdevperf 00:26:43.016 ************************************ 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:43.016 11:40:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.016 { 00:26:43.017 "subsystems": [ 00:26:43.017 { 00:26:43.017 "subsystem": "bdev", 00:26:43.017 "config": [ 00:26:43.017 { 00:26:43.017 "params": { 00:26:43.017 "io_mechanism": "io_uring_cmd", 00:26:43.017 "conserve_cpu": true, 00:26:43.017 "filename": "/dev/ng0n1", 00:26:43.017 "name": "xnvme_bdev" 00:26:43.017 }, 00:26:43.017 "method": "bdev_xnvme_create" 00:26:43.017 }, 00:26:43.017 { 00:26:43.017 "method": "bdev_wait_for_examine" 00:26:43.017 } 00:26:43.017 ] 00:26:43.017 } 00:26:43.017 ] 00:26:43.017 } 00:26:43.017 [2024-11-20 11:40:48.347088] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:26:43.017 [2024-11-20 11:40:48.347487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73155 ] 00:26:43.017 [2024-11-20 11:40:48.537273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.017 [2024-11-20 11:40:48.693219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.275 Running I/O for 5 seconds... 00:26:45.618 49664.00 IOPS, 194.00 MiB/s [2024-11-20T11:40:52.318Z] 49696.00 IOPS, 194.12 MiB/s [2024-11-20T11:40:53.254Z] 50261.33 IOPS, 196.33 MiB/s [2024-11-20T11:40:54.193Z] 50048.00 IOPS, 195.50 MiB/s [2024-11-20T11:40:54.193Z] 50022.40 IOPS, 195.40 MiB/s 00:26:48.427 Latency(us) 00:26:48.427 [2024-11-20T11:40:54.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.427 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:48.427 xnvme_bdev : 5.00 49988.04 195.27 0.00 0.00 1276.24 808.03 4051.32 00:26:48.427 [2024-11-20T11:40:54.193Z] =================================================================================================================== 00:26:48.427 [2024-11-20T11:40:54.193Z] Total : 49988.04 195.27 0.00 0.00 1276.24 808.03 4051.32 00:26:49.408 11:40:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:49.408 11:40:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:49.408 11:40:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:26:49.408 11:40:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:49.408 11:40:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:49.408 { 00:26:49.408 "subsystems": [ 00:26:49.408 { 00:26:49.408 "subsystem": "bdev", 00:26:49.408 "config": [ 00:26:49.408 { 00:26:49.408 "params": { 00:26:49.408 "io_mechanism": "io_uring_cmd", 00:26:49.408 "conserve_cpu": true, 00:26:49.408 "filename": "/dev/ng0n1", 00:26:49.408 "name": "xnvme_bdev" 00:26:49.408 }, 00:26:49.408 "method": "bdev_xnvme_create" 00:26:49.408 }, 00:26:49.408 { 00:26:49.408 "method": "bdev_wait_for_examine" 00:26:49.408 } 00:26:49.408 ] 00:26:49.408 } 00:26:49.408 ] 00:26:49.408 } 00:26:49.408 [2024-11-20 11:40:55.054918] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:26:49.408 [2024-11-20 11:40:55.055374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73233 ] 00:26:49.695 [2024-11-20 11:40:55.224338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.695 [2024-11-20 11:40:55.334028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.954 Running I/O for 5 seconds... 00:26:52.267 40983.00 IOPS, 160.09 MiB/s [2024-11-20T11:40:58.971Z] 41899.50 IOPS, 163.67 MiB/s [2024-11-20T11:40:59.905Z] 42567.67 IOPS, 166.28 MiB/s [2024-11-20T11:41:00.840Z] 42901.75 IOPS, 167.58 MiB/s [2024-11-20T11:41:00.840Z] 42974.20 IOPS, 167.87 MiB/s 00:26:55.074 Latency(us) 00:26:55.074 [2024-11-20T11:41:00.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.074 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:26:55.074 xnvme_bdev : 5.01 42917.05 167.64 0.00 0.00 1485.85 96.35 7298.33 00:26:55.074 [2024-11-20T11:41:00.840Z] =================================================================================================================== 00:26:55.074 [2024-11-20T11:41:00.840Z] Total : 42917.05 167.64 0.00 0.00 1485.85 96.35 7298.33 00:26:56.032 11:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:26:56.032 11:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:26:56.032 11:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:26:56.032 11:41:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:56.032 11:41:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.291 { 00:26:56.291 "subsystems": [ 00:26:56.291 { 00:26:56.291 "subsystem": "bdev", 00:26:56.291 "config": [ 00:26:56.291 { 00:26:56.291 "params": { 00:26:56.291 "io_mechanism": "io_uring_cmd", 00:26:56.291 "conserve_cpu": true, 00:26:56.291 "filename": "/dev/ng0n1", 00:26:56.291 "name": "xnvme_bdev" 00:26:56.291 }, 00:26:56.291 "method": "bdev_xnvme_create" 00:26:56.291 }, 00:26:56.291 { 00:26:56.291 "method": "bdev_wait_for_examine" 00:26:56.291 } 00:26:56.291 ] 00:26:56.291 } 00:26:56.291 ] 00:26:56.291 } 00:26:56.291 [2024-11-20 11:41:01.848845] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:26:56.291 [2024-11-20 11:41:01.849030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73313 ] 00:26:56.291 [2024-11-20 11:41:02.034872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.550 [2024-11-20 11:41:02.151871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.809 Running I/O for 5 seconds... 
00:26:59.122 76608.00 IOPS, 299.25 MiB/s [2024-11-20T11:41:05.825Z] 77120.00 IOPS, 301.25 MiB/s [2024-11-20T11:41:06.759Z] 76821.33 IOPS, 300.08 MiB/s [2024-11-20T11:41:07.692Z] 76352.00 IOPS, 298.25 MiB/s 00:27:01.926 Latency(us) 00:27:01.926 [2024-11-20T11:41:07.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.926 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:27:01.926 xnvme_bdev : 5.00 76245.89 297.84 0.00 0.00 835.88 502.69 3068.28 00:27:01.926 [2024-11-20T11:41:07.692Z] =================================================================================================================== 00:27:01.926 [2024-11-20T11:41:07.692Z] Total : 76245.89 297.84 0.00 0.00 835.88 502.69 3068.28 00:27:02.871 11:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:02.871 11:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:27:02.871 11:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:27:02.871 11:41:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:27:02.871 11:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.871 { 00:27:02.871 "subsystems": [ 00:27:02.871 { 00:27:02.871 "subsystem": "bdev", 00:27:02.871 "config": [ 00:27:02.871 { 00:27:02.871 "params": { 00:27:02.871 "io_mechanism": "io_uring_cmd", 00:27:02.871 "conserve_cpu": true, 00:27:02.871 "filename": "/dev/ng0n1", 00:27:02.871 "name": "xnvme_bdev" 00:27:02.871 }, 00:27:02.871 "method": "bdev_xnvme_create" 00:27:02.871 }, 00:27:02.871 { 00:27:02.871 "method": "bdev_wait_for_examine" 00:27:02.871 } 00:27:02.871 ] 00:27:02.871 } 00:27:02.871 ] 00:27:02.871 } 00:27:02.871 [2024-11-20 11:41:08.538838] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:27:02.871 [2024-11-20 11:41:08.538983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73386 ] 00:27:03.130 [2024-11-20 11:41:08.710115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.130 [2024-11-20 11:41:08.839844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.746 Running I/O for 5 seconds... 
00:27:05.629 42785.00 IOPS, 167.13 MiB/s [2024-11-20T11:41:12.329Z] 42974.50 IOPS, 167.87 MiB/s [2024-11-20T11:41:13.264Z] 42454.33 IOPS, 165.84 MiB/s [2024-11-20T11:41:14.641Z] 42202.75 IOPS, 164.85 MiB/s [2024-11-20T11:41:14.641Z] 42023.00 IOPS, 164.15 MiB/s 00:27:08.875 Latency(us) 00:27:08.875 [2024-11-20T11:41:14.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.875 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:27:08.875 xnvme_bdev : 5.01 41978.98 163.98 0.00 0.00 1516.66 91.69 11319.85 00:27:08.875 [2024-11-20T11:41:14.641Z] =================================================================================================================== 00:27:08.875 [2024-11-20T11:41:14.641Z] Total : 41978.98 163.98 0.00 0.00 1516.66 91.69 11319.85 00:27:09.813 00:27:09.813 real 0m27.256s 00:27:09.813 user 0m17.143s 00:27:09.813 sys 0m7.849s 00:27:09.813 11:41:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.813 11:41:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.813 ************************************ 00:27:09.813 END TEST xnvme_bdevperf 00:27:09.813 ************************************ 00:27:09.813 11:41:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:27:09.813 11:41:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:09.813 11:41:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.813 11:41:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:09.813 ************************************ 00:27:09.813 START TEST xnvme_fio_plugin 00:27:09.813 ************************************ 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:09.813 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:10.072 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:27:10.072 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:10.072 11:41:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:10.072 { 00:27:10.072 "subsystems": [ 00:27:10.072 { 00:27:10.072 "subsystem": "bdev", 00:27:10.072 "config": [ 00:27:10.072 { 00:27:10.072 "params": { 00:27:10.072 "io_mechanism": "io_uring_cmd", 00:27:10.072 "conserve_cpu": true, 00:27:10.072 "filename": "/dev/ng0n1", 00:27:10.072 "name": "xnvme_bdev" 00:27:10.072 }, 00:27:10.072 "method": "bdev_xnvme_create" 00:27:10.072 }, 00:27:10.072 { 00:27:10.072 "method": "bdev_wait_for_examine" 00:27:10.072 } 00:27:10.072 ] 00:27:10.072 } 00:27:10.072 ] 00:27:10.072 } 00:27:10.072 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:27:10.072 fio-3.35 00:27:10.072 Starting 1 thread 00:27:16.640 00:27:16.640 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73506: Wed Nov 20 11:41:21 2024 00:27:16.640 read: IOPS=47.8k, BW=187MiB/s (196MB/s)(934MiB/5001msec) 00:27:16.640 slat (usec): min=2, max=100, avg= 4.14, stdev= 1.96 00:27:16.640 clat (usec): min=787, max=3942, avg=1171.69, stdev=162.88 00:27:16.640 lat (usec): min=790, max=3950, avg=1175.83, stdev=163.32 00:27:16.640 clat percentiles (usec): 00:27:16.640 | 1.00th=[ 889], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1045], 00:27:16.640 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:27:16.640 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1352], 95.00th=[ 1450], 00:27:16.640 | 99.00th=[ 1696], 99.50th=[ 1762], 99.90th=[ 2114], 99.95th=[ 2868], 00:27:16.640 | 99.99th=[ 3851] 00:27:16.640 bw ( KiB/s): min=167424, max=216576, per=100.00%, avg=191146.67, stdev=12593.53, samples=9 00:27:16.640 iops : min=41856, max=54144, avg=47786.67, stdev=3148.38, samples=9 00:27:16.640 lat (usec) : 1000=10.44% 00:27:16.640 lat (msec) : 2=89.43%, 4=0.13% 00:27:16.640 cpu : usr=50.56%, sys=46.12%, ctx=11, majf=0, minf=762 00:27:16.640 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:27:16.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.640 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:27:16.640 issued rwts: total=238976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.640 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:16.640 00:27:16.640 Run status group 0 (all jobs): 00:27:16.640 READ: bw=187MiB/s (196MB/s), 187MiB/s-187MiB/s (196MB/s-196MB/s), io=934MiB (979MB), run=5001-5001msec 00:27:17.578 ----------------------------------------------------- 00:27:17.578 Suppressions used: 00:27:17.578 count bytes template 00:27:17.578 1 11 /usr/src/fio/parse.c 00:27:17.578 1 8 libtcmalloc_minimal.so 00:27:17.578 1 904 libcrypto.so 00:27:17.578 ----------------------------------------------------- 00:27:17.578 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:17.578 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:17.579 11:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:27:17.579 { 00:27:17.579 "subsystems": [ 00:27:17.579 { 00:27:17.579 "subsystem": "bdev", 00:27:17.579 "config": [ 00:27:17.579 { 00:27:17.579 "params": { 00:27:17.579 "io_mechanism": "io_uring_cmd", 00:27:17.579 "conserve_cpu": true, 00:27:17.579 "filename": "/dev/ng0n1", 00:27:17.579 "name": "xnvme_bdev" 00:27:17.579 }, 00:27:17.579 "method": "bdev_xnvme_create" 00:27:17.579 }, 00:27:17.579 { 00:27:17.579 "method": "bdev_wait_for_examine" 00:27:17.579 } 00:27:17.579 ] 00:27:17.579 } 00:27:17.579 ] 00:27:17.579 } 00:27:17.838 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:27:17.838 fio-3.35 00:27:17.838 Starting 1 thread 00:27:24.402 00:27:24.402 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73607: Wed Nov 20 11:41:29 2024 00:27:24.402 write: IOPS=40.0k, BW=156MiB/s (164MB/s)(781MiB/5001msec); 0 zone resets 00:27:24.402 slat (usec): min=2, max=220, avg= 5.42, stdev= 4.50 00:27:24.402 clat (usec): min=61, max=12603, avg=1432.68, stdev=1018.89 00:27:24.402 lat (usec): min=67, max=12609, avg=1438.11, stdev=1019.16 00:27:24.402 clat percentiles (usec): 00:27:24.402 | 1.00th=[ 165], 5.00th=[ 334], 10.00th=[ 603], 20.00th=[ 1045], 00:27:24.402 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1254], 00:27:24.402 | 70.00th=[ 1319], 80.00th=[ 1418], 90.00th=[ 2376], 95.00th=[ 4178], 00:27:24.402 | 99.00th=[ 5211], 99.50th=[ 5538], 99.90th=[ 6718], 99.95th=[ 8356], 00:27:24.402 | 99.99th=[12256] 00:27:24.402 bw ( KiB/s): min=126944, max=182272, per=98.39%, avg=157258.67, stdev=22485.34, samples=9 00:27:24.402 iops : min=31736, max=45568, avg=39314.67, stdev=5621.33, samples=9 00:27:24.402 lat (usec) : 100=0.09%, 250=2.97%, 500=5.27%, 750=3.51%, 1000=4.26% 00:27:24.402 lat (msec) : 2=73.43%, 4=4.69%, 10=5.76%, 20=0.03% 00:27:24.402 cpu : usr=51.32%, sys=39.56%, ctx=55, majf=0, minf=762 00:27:24.402 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.4%, 16=19.6%, 32=58.1%, >=64=4.8% 00:27:24.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.402 complete : 0=0.0%, 4=97.5%, 8=0.6%, 16=0.4%, 32=0.3%, 64=1.2%, >=64=0.0% 00:27:24.402 issued rwts: total=0,199828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.402 00:27:24.402 Run status group 0 (all jobs): 00:27:24.402 WRITE: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=781MiB (818MB), run=5001-5001msec 00:27:24.970 ----------------------------------------------------- 00:27:24.970 Suppressions used: 00:27:24.970 count bytes template 00:27:24.970 1 11 /usr/src/fio/parse.c 00:27:24.970 1 8 libtcmalloc_minimal.so 00:27:24.970 1 904 libcrypto.so 00:27:24.970 ----------------------------------------------------- 00:27:24.970 00:27:24.970 ************************************ 00:27:24.970 END TEST xnvme_fio_plugin 00:27:24.970 ************************************ 00:27:24.970 00:27:24.970 real 0m14.941s 00:27:24.970 user 0m8.881s 00:27:24.970 sys 0m5.126s 00:27:24.970 11:41:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.970 11:41:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 11:41:30 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73075 00:27:24.970 11:41:30 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73075 ']' 00:27:24.970 11:41:30 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73075 
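Everything the fio plugin test needs is passed on the command line, with LD_PRELOAD pulling libasan.so.8 in ahead of the external spdk_bdev ioengine, as the trace shows. The same invocation expressed as a job file, using only options visible in the log (a reference sketch, not a file the test actually writes):

  [global]
  ioengine=spdk_bdev
  spdk_json_conf=/dev/fd/62
  thread=1
  direct=1
  bs=4k
  iodepth=64
  numjobs=1
  time_based=1
  runtime=5

  [xnvme_bdev]
  filename=xnvme_bdev
  rw=randwrite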
00:27:24.970 Process with pid 73075 is not found 00:27:24.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73075) - No such process 00:27:24.970 11:41:30 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73075 is not found' 00:27:24.970 00:27:24.970 real 3m48.653s 00:27:24.970 user 2m6.916s 00:27:24.970 sys 1m25.107s 00:27:24.970 11:41:30 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.970 11:41:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 ************************************ 00:27:24.970 END TEST nvme_xnvme 00:27:24.970 ************************************ 00:27:24.970 11:41:30 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:27:24.970 11:41:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:24.970 11:41:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.970 11:41:30 -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 ************************************ 00:27:24.970 START TEST blockdev_xnvme 00:27:24.970 ************************************ 00:27:24.970 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:27:24.970 * Looking for test storage... 00:27:24.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:24.970 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:24.970 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:27:24.970 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.229 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.229 11:41:30 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.230 11:41:30 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.230 --rc genhtml_branch_coverage=1 00:27:25.230 --rc genhtml_function_coverage=1 00:27:25.230 --rc genhtml_legend=1 00:27:25.230 --rc geninfo_all_blocks=1 00:27:25.230 --rc geninfo_unexecuted_blocks=1 00:27:25.230 00:27:25.230 ' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.230 --rc genhtml_branch_coverage=1 00:27:25.230 --rc genhtml_function_coverage=1 00:27:25.230 --rc genhtml_legend=1 00:27:25.230 --rc geninfo_all_blocks=1 00:27:25.230 --rc geninfo_unexecuted_blocks=1 00:27:25.230 00:27:25.230 ' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.230 --rc genhtml_branch_coverage=1 00:27:25.230 --rc genhtml_function_coverage=1 00:27:25.230 --rc genhtml_legend=1 00:27:25.230 --rc geninfo_all_blocks=1 00:27:25.230 --rc geninfo_unexecuted_blocks=1 00:27:25.230 00:27:25.230 ' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.230 --rc genhtml_branch_coverage=1 00:27:25.230 --rc genhtml_function_coverage=1 00:27:25.230 --rc genhtml_legend=1 00:27:25.230 --rc geninfo_all_blocks=1 00:27:25.230 --rc geninfo_unexecuted_blocks=1 00:27:25.230 00:27:25.230 ' 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:27:25.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73737 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73737 00:27:25.230 11:41:30 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73737 ']' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.230 11:41:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:25.230 [2024-11-20 11:41:30.939314] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
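The cmp_versions walk traced just above (lt 1.15 2, from the lcov version probe) splits both versions on ".-:", pads the shorter array with zeros, and returns at the first differing component. Condensed to its core logic (the real scripts/common.sh additionally validates each component with its decimal helper):

  lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal, so not strictly less-than
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2'   # first component decides: 1 < 2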
00:27:25.230 [2024-11-20 11:41:30.939773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73737 ] 00:27:25.489 [2024-11-20 11:41:31.131417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.747 [2024-11-20 11:41:31.316538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.685 11:41:32 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.685 11:41:32 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:27:26.685 11:41:32 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:27:26.685 11:41:32 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:27:26.685 11:41:32 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:27:26.685 11:41:32 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:27:26.685 11:41:32 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:26.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:27.512 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:27.512 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:27.512 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:27:27.772 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
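The get_zoned_devs loop being traced here probes queue/zoned for every nvme block node and keeps only devices reporting something other than "none"; on this VM every check hits the "[[ none != none ]]" branch, so the zoned set stays empty and all six namespaces go on to become xnvme bdevs. The shape of that scan (a sketch; the real helper's bookkeeping differs slightly):

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
      zoned_devs[${nvme##*/}]=1   # zoned namespaces get excluded from the test
    fi
  done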
00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:27.772 nvme0n1 00:27:27.772 nvme1n1 00:27:27.772 nvme1n2 00:27:27.772 nvme1n3 00:27:27.772 nvme2n1 00:27:27.772 nvme3n1 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:27:27.772 11:41:33 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.772 11:41:33 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.772 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:27:27.773 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ff529b97-06a3-45e2-b479-d2ddfbbac119"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ff529b97-06a3-45e2-b479-d2ddfbbac119",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "bc36621b-5721-4d6c-a78d-d18ab0d6399c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc36621b-5721-4d6c-a78d-d18ab0d6399c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "08decc1b-58a1-42c0-bd14-e6d1ae7e45d4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "08decc1b-58a1-42c0-bd14-e6d1ae7e45d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "dfdce6f2-801e-48c7-9d48-1ef54eaae09f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dfdce6f2-801e-48c7-9d48-1ef54eaae09f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:27:27.773 ' "519e82ba-648a-4158-a678-37b7366294e6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "519e82ba-648a-4158-a678-37b7366294e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a2363292-76c4-4ee2-b5cf-a8cedf83a9d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a2363292-76c4-4ee2-b5cf-a8cedf83a9d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:27:28.032 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:27:28.032 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:27:28.032 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:27:28.032 11:41:33 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73737 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73737 ']' 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73737 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73737 00:27:28.032 killing process with pid 73737 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73737' 00:27:28.032 11:41:33 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73737 00:27:28.032 
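The printf | rpc_cmd pair above replays six bdev_xnvme_create lines plus bdev_wait_for_examine into the target, and the bdev_get_bdevs dump above confirms all six xNVMe bdevs exist. Against a running target the same setup could be driven with scripts/rpc.py directly, with the arguments exactly as printed in the trace (-c enables conserve_cpu):

  for nvme in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 /dev/nvme2n1 /dev/nvme3n1; do
    ./scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
  done
  ./scripts/rpc.py bdev_wait_for_examine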
11:41:33 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73737 00:27:29.941 11:41:35 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:29.941 11:41:35 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:27:29.941 11:41:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:29.941 11:41:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.941 11:41:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:29.941 ************************************ 00:27:29.941 START TEST bdev_hello_world 00:27:29.941 ************************************ 00:27:29.941 11:41:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:27:30.199 [2024-11-20 11:41:35.795559] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:27:30.199 [2024-11-20 11:41:35.795728] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74027 ] 00:27:30.458 [2024-11-20 11:41:35.977218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.458 [2024-11-20 11:41:36.084151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.025 [2024-11-20 11:41:36.496006] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:31.025 [2024-11-20 11:41:36.496064] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:27:31.025 [2024-11-20 11:41:36.496104] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:31.025 [2024-11-20 11:41:36.498461] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:31.025 [2024-11-20 11:41:36.498850] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:31.025 [2024-11-20 11:41:36.498881] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:31.025 [2024-11-20 11:41:36.499299] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:27:31.025 00:27:31.025 [2024-11-20 11:41:36.499339] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:31.960 00:27:31.960 ************************************ 00:27:31.960 END TEST bdev_hello_world 00:27:31.960 ************************************ 00:27:31.960 real 0m1.733s 00:27:31.960 user 0m1.348s 00:27:31.960 sys 0m0.266s 00:27:31.960 11:41:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.960 11:41:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:31.960 11:41:37 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:27:31.960 11:41:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:31.960 11:41:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.960 11:41:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:31.960 ************************************ 00:27:31.960 START TEST bdev_bounds 00:27:31.960 ************************************ 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:27:31.960 Process bdevio pid: 74069 00:27:31.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74069 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74069' 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74069 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74069 ']' 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.960 11:41:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:31.960 [2024-11-20 11:41:37.586519] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:27:31.960 [2024-11-20 11:41:37.586718] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74069 ] 00:27:32.218 [2024-11-20 11:41:37.767065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.218 [2024-11-20 11:41:37.876290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.218 [2024-11-20 11:41:37.876406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.218 [2024-11-20 11:41:37.876422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.155 11:41:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.155 11:41:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:27:33.155 11:41:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:33.155 I/O targets: 00:27:33.155 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:33.155 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:27:33.155 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:27:33.155 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:27:33.155 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:27:33.155 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:27:33.155 00:27:33.155 00:27:33.155 CUnit - A unit testing framework for C - Version 2.1-3 00:27:33.155 http://cunit.sourceforge.net/ 00:27:33.155 00:27:33.155 00:27:33.155 Suite: bdevio tests on: nvme3n1 00:27:33.155 Test: blockdev write read block ...passed 00:27:33.155 Test: blockdev write zeroes read block ...passed 00:27:33.155 Test: blockdev write zeroes read no split ...passed 00:27:33.155 Test: blockdev write zeroes read split ...passed 00:27:33.155 Test: blockdev write zeroes read split partial ...passed 00:27:33.155 Test: blockdev reset ...passed 00:27:33.155 Test: blockdev write read 8 blocks ...passed 00:27:33.155 Test: blockdev write read size > 128k ...passed 00:27:33.155 Test: blockdev write read invalid size ...passed 00:27:33.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:33.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:33.155 Test: blockdev write read max offset ...passed 00:27:33.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:33.155 Test: blockdev writev readv 8 blocks ...passed 00:27:33.155 Test: blockdev writev readv 30 x 1block ...passed 00:27:33.155 Test: blockdev writev readv block ...passed 00:27:33.155 Test: blockdev writev readv size > 128k ...passed 00:27:33.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:33.155 Test: blockdev comparev and writev ...passed 00:27:33.155 Test: blockdev nvme passthru rw ...passed 00:27:33.155 Test: blockdev nvme passthru vendor specific ...passed 00:27:33.155 Test: blockdev nvme admin passthru ...passed 00:27:33.155 Test: blockdev copy ...passed 00:27:33.155 Suite: bdevio tests on: nvme2n1 00:27:33.155 Test: blockdev write read block ...passed 00:27:33.155 Test: blockdev write zeroes read block ...passed 00:27:33.155 Test: blockdev write zeroes read no split ...passed 00:27:33.155 Test: blockdev write zeroes read split ...passed 00:27:33.155 Test: blockdev write zeroes read split partial ...passed 00:27:33.155 Test: blockdev reset ...passed 
00:27:33.155 Test: blockdev write read 8 blocks ...passed 00:27:33.155 Test: blockdev write read size > 128k ...passed 00:27:33.155 Test: blockdev write read invalid size ...passed 00:27:33.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:33.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:33.155 Test: blockdev write read max offset ...passed 00:27:33.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:33.155 Test: blockdev writev readv 8 blocks ...passed 00:27:33.155 Test: blockdev writev readv 30 x 1block ...passed 00:27:33.155 Test: blockdev writev readv block ...passed 00:27:33.155 Test: blockdev writev readv size > 128k ...passed 00:27:33.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:33.155 Test: blockdev comparev and writev ...passed 00:27:33.155 Test: blockdev nvme passthru rw ...passed 00:27:33.155 Test: blockdev nvme passthru vendor specific ...passed 00:27:33.156 Test: blockdev nvme admin passthru ...passed 00:27:33.156 Test: blockdev copy ...passed 00:27:33.156 Suite: bdevio tests on: nvme1n3 00:27:33.156 Test: blockdev write read block ...passed 00:27:33.156 Test: blockdev write zeroes read block ...passed 00:27:33.156 Test: blockdev write zeroes read no split ...passed 00:27:33.156 Test: blockdev write zeroes read split ...passed 00:27:33.156 Test: blockdev write zeroes read split partial ...passed 00:27:33.156 Test: blockdev reset ...passed 00:27:33.156 Test: blockdev write read 8 blocks ...passed 00:27:33.156 Test: blockdev write read size > 128k ...passed 00:27:33.156 Test: blockdev write read invalid size ...passed 00:27:33.156 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:33.156 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:33.156 Test: blockdev write read max offset ...passed 00:27:33.156 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:33.156 Test: blockdev writev readv 8 blocks ...passed 00:27:33.156 Test: blockdev writev readv 30 x 1block ...passed 00:27:33.156 Test: blockdev writev readv block ...passed 00:27:33.156 Test: blockdev writev readv size > 128k ...passed 00:27:33.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:33.156 Test: blockdev comparev and writev ...passed 00:27:33.156 Test: blockdev nvme passthru rw ...passed 00:27:33.156 Test: blockdev nvme passthru vendor specific ...passed 00:27:33.156 Test: blockdev nvme admin passthru ...passed 00:27:33.156 Test: blockdev copy ...passed 00:27:33.156 Suite: bdevio tests on: nvme1n2 00:27:33.156 Test: blockdev write read block ...passed 00:27:33.156 Test: blockdev write zeroes read block ...passed 00:27:33.156 Test: blockdev write zeroes read no split ...passed 00:27:33.156 Test: blockdev write zeroes read split ...passed 00:27:33.156 Test: blockdev write zeroes read split partial ...passed 00:27:33.156 Test: blockdev reset ...passed 00:27:33.156 Test: blockdev write read 8 blocks ...passed 00:27:33.156 Test: blockdev write read size > 128k ...passed 00:27:33.156 Test: blockdev write read invalid size ...passed 00:27:33.156 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:33.156 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:33.156 Test: blockdev write read max offset ...passed 00:27:33.156 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:33.156 Test: blockdev writev readv 8 blocks 
...passed 00:27:33.156 Test: blockdev writev readv 30 x 1block ...passed 00:27:33.156 Test: blockdev writev readv block ...passed 00:27:33.156 Test: blockdev writev readv size > 128k ...passed 00:27:33.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:33.156 Test: blockdev comparev and writev ...passed 00:27:33.156 Test: blockdev nvme passthru rw ...passed 00:27:33.156 Test: blockdev nvme passthru vendor specific ...passed 00:27:33.156 Test: blockdev nvme admin passthru ...passed 00:27:33.156 Test: blockdev copy ...passed 00:27:33.156 Suite: bdevio tests on: nvme1n1 00:27:33.156 Test: blockdev write read block ...passed 00:27:33.156 Test: blockdev write zeroes read block ...passed 00:27:33.156 Test: blockdev write zeroes read no split ...passed 00:27:33.415 Test: blockdev write zeroes read split ...passed 00:27:33.415 Test: blockdev write zeroes read split partial ...passed 00:27:33.415 Test: blockdev reset ...passed 00:27:33.415 Test: blockdev write read 8 blocks ...passed 00:27:33.415 Test: blockdev write read size > 128k ...passed 00:27:33.415 Test: blockdev write read invalid size ...passed 00:27:33.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:33.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:33.415 Test: blockdev write read max offset ...passed 00:27:33.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:33.415 Test: blockdev writev readv 8 blocks ...passed 00:27:33.415 Test: blockdev writev readv 30 x 1block ...passed 00:27:33.415 Test: blockdev writev readv block ...passed 00:27:33.415 Test: blockdev writev readv size > 128k ...passed 00:27:33.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:33.415 Test: blockdev comparev and writev ...passed 00:27:33.415 Test: blockdev nvme passthru rw ...passed 00:27:33.415 Test: blockdev nvme passthru vendor specific ...passed 00:27:33.415 Test: blockdev nvme admin passthru ...passed 00:27:33.415 Test: blockdev copy ...passed 00:27:33.415 Suite: bdevio tests on: nvme0n1 00:27:33.415 Test: blockdev write read block ...passed 00:27:33.415 Test: blockdev write zeroes read block ...passed 00:27:33.415 Test: blockdev write zeroes read no split ...passed 00:27:33.415 Test: blockdev write zeroes read split ...passed 00:27:33.415 Test: blockdev write zeroes read split partial ...passed 00:27:33.415 Test: blockdev reset ...passed 00:27:33.415 Test: blockdev write read 8 blocks ...passed 00:27:33.415 Test: blockdev write read size > 128k ...passed 00:27:33.415 Test: blockdev write read invalid size ...passed 00:27:33.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:33.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:33.415 Test: blockdev write read max offset ...passed 00:27:33.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:33.415 Test: blockdev writev readv 8 blocks ...passed 00:27:33.415 Test: blockdev writev readv 30 x 1block ...passed 00:27:33.415 Test: blockdev writev readv block ...passed 00:27:33.415 Test: blockdev writev readv size > 128k ...passed 00:27:33.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:33.415 Test: blockdev comparev and writev ...passed 00:27:33.415 Test: blockdev nvme passthru rw ...passed 00:27:33.415 Test: blockdev nvme passthru vendor specific ...passed 00:27:33.415 Test: blockdev nvme admin passthru ...passed 00:27:33.415 Test: blockdev copy ...passed 
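A consistency check on the CUnit summary that follows: each of the six bdevio suites above runs the same 23 tests, from "blockdev write read block" through "blockdev copy", which is where the totals line comes from:

  echo $(( 6 * 23 ))   # 138, matching "tests 138 138 138" in the run summary below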
00:27:33.415 
00:27:33.415 Run Summary: Type Total Ran Passed Failed Inactive
00:27:33.415 suites 6 6 n/a 0 0
00:27:33.415 tests 138 138 138 0 0
00:27:33.415 asserts 780 780 780 0 n/a
00:27:33.415 
00:27:33.415 Elapsed time = 1.007 seconds
00:27:33.415 0
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74069
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74069 ']'
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74069
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74069
00:27:33.415 killing process with pid 74069
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74069'
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74069
00:27:33.415 11:41:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74069
00:27:34.348 ************************************
00:27:34.348 END TEST bdev_bounds
00:27:34.348 ************************************
00:27:34.348 11:41:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:27:34.348 
00:27:34.348 real 0m2.584s
00:27:34.348 user 0m6.458s
00:27:34.348 sys 0m0.425s
00:27:34.348 11:41:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:34.348 11:41:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:27:34.348 11:41:40 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' ''
00:27:34.348 11:41:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:27:34.348 11:41:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:34.348 11:41:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:27:34.607 ************************************
00:27:34.607 START TEST bdev_nbd
00:27:34.607 ************************************
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' ''
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
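nbd_function_test drives everything that follows over the dedicated RPC socket /var/tmp/spdk-nbd.sock. In the data-verify stage further down it pairs the six bdevs from bdev.json with the first six nbd device nodes by index, while the first start/stop pass lets the target pick the node itself. A minimal sketch of that index pairing (variable names mirror the trace but the loop is illustrative, not the harness's own code):

  bdev_list=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for i in "${!bdev_list[@]}"; do
      # each bdev is exported on its matching node via the nbd_start_disk RPC
      echo "${bdev_list[$i]} -> ${nbd_list[$i]}"
  done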
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:27:34.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74124
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74124 /var/tmp/spdk-nbd.sock
00:27:34.607 11:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:27:34.608 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74124 ']'
00:27:34.608 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:27:34.608 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:34.608 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:27:34.608 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:34.608 11:41:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:27:34.608 [2024-11-20 11:41:40.247596] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
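At this point bdev_svc has just been launched with -r /var/tmp/spdk-nbd.sock, and waitforlisten polls (up to max_retries=100) until the target answers on that socket before any nbd RPCs are issued. A rough standalone approximation of that polling loop, assuming the rpc.py path from the trace and the generic rpc_get_methods call:

  wait_for_rpc() {
      local sock=$1 retries=${2:-100}
      for ((i = 0; i < retries; i++)); do
          # succeeds only once the target is up and accepting RPCs
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
              >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_rpc /var/tmp/spdk-nbd.sock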
00:27:34.608 [2024-11-20 11:41:40.247802] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.866 [2024-11-20 11:41:40.434998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.866 [2024-11-20 11:41:40.569503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:27:35.434 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:35.435 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.003 
1+0 records in 00:27:36.003 1+0 records out 00:27:36.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519562 s, 7.9 MB/s 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:36.003 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.263 1+0 records in 00:27:36.263 1+0 records out 00:27:36.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500514 s, 8.2 MB/s 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:36.263 11:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:27:36.522 11:41:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.522 1+0 records in 00:27:36.522 1+0 records out 00:27:36.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586648 s, 7.0 MB/s 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:36.522 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.781 1+0 records in 00:27:36.781 1+0 records out 00:27:36.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077087 s, 5.3 MB/s 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:36.781 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.349 1+0 records in 00:27:37.349 1+0 records out 00:27:37.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00166576 s, 2.5 MB/s 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:37.349 11:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:27:37.607 11:41:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.607 1+0 records in 00:27:37.607 1+0 records out 00:27:37.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853705 s, 4.8 MB/s 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:27:37.607 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd0", 00:27:37.865 "bdev_name": "nvme0n1" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd1", 00:27:37.865 "bdev_name": "nvme1n1" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd2", 00:27:37.865 "bdev_name": "nvme1n2" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd3", 00:27:37.865 "bdev_name": "nvme1n3" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd4", 00:27:37.865 "bdev_name": "nvme2n1" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd5", 00:27:37.865 "bdev_name": "nvme3n1" 00:27:37.865 } 00:27:37.865 ]' 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd0", 00:27:37.865 "bdev_name": "nvme0n1" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd1", 00:27:37.865 "bdev_name": "nvme1n1" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd2", 00:27:37.865 "bdev_name": "nvme1n2" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd3", 00:27:37.865 "bdev_name": "nvme1n3" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd4", 00:27:37.865 "bdev_name": "nvme2n1" 00:27:37.865 }, 00:27:37.865 { 00:27:37.865 "nbd_device": "/dev/nbd5", 00:27:37.865 "bdev_name": "nvme3n1" 00:27:37.865 } 00:27:37.865 ]' 00:27:37.865 11:41:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:37.865 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:38.121 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:38.121 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:38.121 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:38.121 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:38.121 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:38.122 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:38.122 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:38.122 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:38.122 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.122 11:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.415 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.672 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.930 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:39.214 11:41:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:39.536 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:39.794 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:39.794 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:39.794 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:40.053 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:27:40.311 /dev/nbd0 00:27:40.311 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:40.311 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:40.312 1+0 records in 00:27:40.312 1+0 records out 00:27:40.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633318 s, 6.5 MB/s 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:40.312 11:41:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:27:40.571 /dev/nbd1 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:40.571 1+0 records in 00:27:40.571 1+0 records out 00:27:40.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627208 s, 6.5 MB/s 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:40.571 11:41:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:40.571 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:27:41.138 /dev/nbd10 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:41.138 1+0 records in 00:27:41.138 1+0 records out 00:27:41.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536316 s, 7.6 MB/s 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:41.138 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:27:41.397 /dev/nbd11 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:41.397 11:41:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:41.397 11:41:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:41.397 1+0 records in 00:27:41.397 1+0 records out 00:27:41.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842491 s, 4.9 MB/s 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:41.398 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:27:41.657 /dev/nbd12 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:41.657 1+0 records in 00:27:41.657 1+0 records out 00:27:41.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066049 s, 6.2 MB/s 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:41.657 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:27:41.916 /dev/nbd13 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:41.916 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:41.917 1+0 records in 00:27:41.917 1+0 records out 00:27:41.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628493 s, 6.5 MB/s 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:41.917 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:42.483 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:42.483 { 00:27:42.483 "nbd_device": "/dev/nbd0", 00:27:42.483 "bdev_name": "nvme0n1" 00:27:42.483 }, 00:27:42.483 { 00:27:42.483 "nbd_device": "/dev/nbd1", 00:27:42.483 "bdev_name": "nvme1n1" 00:27:42.483 }, 00:27:42.483 { 00:27:42.483 "nbd_device": "/dev/nbd10", 00:27:42.483 "bdev_name": "nvme1n2" 00:27:42.483 }, 00:27:42.483 { 00:27:42.484 "nbd_device": "/dev/nbd11", 00:27:42.484 "bdev_name": "nvme1n3" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd12", 00:27:42.484 "bdev_name": "nvme2n1" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd13", 00:27:42.484 "bdev_name": "nvme3n1" 00:27:42.484 } 00:27:42.484 ]' 00:27:42.484 11:41:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd0", 00:27:42.484 "bdev_name": "nvme0n1" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd1", 00:27:42.484 "bdev_name": "nvme1n1" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd10", 00:27:42.484 "bdev_name": "nvme1n2" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd11", 00:27:42.484 "bdev_name": "nvme1n3" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd12", 00:27:42.484 "bdev_name": "nvme2n1" 00:27:42.484 }, 00:27:42.484 { 00:27:42.484 "nbd_device": "/dev/nbd13", 00:27:42.484 "bdev_name": "nvme3n1" 00:27:42.484 } 00:27:42.484 ]' 00:27:42.484 11:41:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:42.484 /dev/nbd1 00:27:42.484 /dev/nbd10 00:27:42.484 /dev/nbd11 00:27:42.484 /dev/nbd12 00:27:42.484 /dev/nbd13' 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:42.484 /dev/nbd1 00:27:42.484 /dev/nbd10 00:27:42.484 /dev/nbd11 00:27:42.484 /dev/nbd12 00:27:42.484 /dev/nbd13' 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:42.484 256+0 records in 00:27:42.484 256+0 records out 00:27:42.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00961569 s, 109 MB/s 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:42.484 256+0 records in 00:27:42.484 256+0 records out 00:27:42.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137713 s, 7.6 MB/s 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.484 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:42.742 256+0 records in 00:27:42.742 256+0 records out 00:27:42.742 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.155878 s, 6.7 MB/s 00:27:42.742 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.742 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:42.742 256+0 records in 00:27:42.742 256+0 records out 00:27:42.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143105 s, 7.3 MB/s 00:27:42.742 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:42.742 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:43.000 256+0 records in 00:27:43.000 256+0 records out 00:27:43.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131593 s, 8.0 MB/s 00:27:43.000 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:43.000 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:27:43.258 256+0 records in 00:27:43.258 256+0 records out 00:27:43.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153994 s, 6.8 MB/s 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:27:43.258 256+0 records in 00:27:43.258 256+0 records out 00:27:43.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161731 s, 6.5 MB/s 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.258 11:41:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:43.846 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.106 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
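The verify pass just completed is a plain dd/cmp round trip: one 1 MiB file of random data (bs=4096 count=256) is written through every exported /dev/nbdX with O_DIRECT, then compared back byte-for-byte before the temp file is removed. Reduced to a single device, the pattern is (temp file path hypothetical):

  tmp=$(mktemp)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # push it through the nbd export
  cmp -b -n 1M "$tmp" /dev/nbd0                              # read back and byte-compare
  rm "$tmp"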
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:44.364 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:44.364 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:44.364 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:44.364 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.365 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.365 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:44.365 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:44.365 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.365 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.365 11:41:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.623 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.882 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:27:45.140 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:27:45.140 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:27:45.141 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:27:45.141 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.141 11:41:50 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.141 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:27:45.399 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:45.399 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.399 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:45.399 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:45.399 11:41:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:27:45.658 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:45.917 malloc_lvol_verify 00:27:45.917 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:46.175 edb87a34-a386-41b6-b4fa-df2dc5cc9ee6 00:27:46.175 11:41:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:46.433 8edd3994-1a33-40ed-b268-a0f0b109d1e0 00:27:46.433 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:46.691 /dev/nbd0 00:27:46.949 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:27:46.949 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
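The lvol-verify step just above boils down to a short RPC sequence capped by a format check; its mkfs output follows below. A minimal sketch of the equivalent calls, with the socket path, names, and sizes exactly as they appear in this log (rpc.py stands in for the full scripts/rpc.py path):

    # 16 MiB malloc bdev with 512 B blocks, then an lvstore and a 4 MiB lvol on top
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    # expose the lvol as /dev/nbd0, wait for a nonzero size in /sys/block/nbd0/size,
    # then prove the device is writable by putting ext4 on it
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0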
00:27:46.950 mke2fs 1.47.0 (5-Feb-2023) 00:27:46.950 Discarding device blocks: 0/4096 done 00:27:46.950 Creating filesystem with 4096 1k blocks and 1024 inodes 00:27:46.950 00:27:46.950 Allocating group tables: 0/1 done 00:27:46.950 Writing inode tables: 0/1 done 00:27:46.950 Creating journal (1024 blocks): done 00:27:46.950 Writing superblocks and filesystem accounting information: 0/1 done 00:27:46.950 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:46.950 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74124 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74124 ']' 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74124 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74124 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74124' 00:27:47.208 killing process with pid 74124 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74124 00:27:47.208 11:41:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74124 00:27:48.583 11:41:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:27:48.583 00:27:48.583 real 0m13.884s 00:27:48.583 user 0m19.758s 00:27:48.583 sys 0m4.604s 00:27:48.583 11:41:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.583 11:41:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:48.583 ************************************ 
00:27:48.583 END TEST bdev_nbd 00:27:48.583 ************************************ 00:27:48.583 11:41:54 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:27:48.583 11:41:54 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:27:48.583 11:41:54 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:27:48.583 11:41:54 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:27:48.583 11:41:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:48.583 11:41:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.583 11:41:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:48.583 ************************************ 00:27:48.583 START TEST bdev_fio 00:27:48.583 ************************************ 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:27:48.583 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]' 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:48.583 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:48.584 ************************************ 00:27:48.584 START TEST bdev_fio_rw_verify 00:27:48.584 ************************************ 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:48.584 11:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:48.842 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:48.842 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:48.843 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:48.843 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:48.843 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:48.843 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:48.843 fio-3.35 00:27:48.843 Starting 6 threads 00:28:01.056 00:28:01.056 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74565: Wed Nov 20 11:42:05 2024 00:28:01.056 read: IOPS=27.1k, BW=106MiB/s (111MB/s)(1058MiB/10001msec) 00:28:01.056 slat (usec): min=2, max=1078, avg= 7.82, stdev= 6.13 00:28:01.056 clat (usec): min=114, max=22192, avg=681.84, 
stdev=261.70 00:28:01.056 lat (usec): min=120, max=22205, avg=689.66, stdev=262.61 00:28:01.056 clat percentiles (usec): 00:28:01.056 | 50.000th=[ 701], 99.000th=[ 1319], 99.900th=[ 1876], 99.990th=[ 3884], 00:28:01.056 | 99.999th=[ 7898] 00:28:01.056 write: IOPS=27.4k, BW=107MiB/s (112MB/s)(1072MiB/10001msec); 0 zone resets 00:28:01.056 slat (usec): min=12, max=2219, avg=29.01, stdev=30.02 00:28:01.056 clat (usec): min=101, max=30463, avg=775.34, stdev=372.23 00:28:01.056 lat (usec): min=118, max=30496, avg=804.34, stdev=374.33 00:28:01.056 clat percentiles (usec): 00:28:01.056 | 50.000th=[ 783], 99.000th=[ 1450], 99.900th=[ 1975], 99.990th=[20055], 00:28:01.056 | 99.999th=[30278] 00:28:01.056 bw ( KiB/s): min=91095, max=134305, per=100.00%, avg=109877.58, stdev=2141.67, samples=114 00:28:01.056 iops : min=22773, max=33576, avg=27469.16, stdev=535.42, samples=114 00:28:01.056 lat (usec) : 250=2.56%, 500=17.39%, 750=31.69%, 1000=36.21% 00:28:01.056 lat (msec) : 2=12.06%, 4=0.08%, 10=0.01%, 50=0.01% 00:28:01.056 cpu : usr=59.11%, sys=27.12%, ctx=7877, majf=0, minf=23395 00:28:01.056 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.056 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.056 issued rwts: total=270920,274367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.056 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:01.056 00:28:01.056 Run status group 0 (all jobs): 00:28:01.056 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1058MiB (1110MB), run=10001-10001msec 00:28:01.056 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1072MiB (1124MB), run=10001-10001msec 00:28:01.056 ----------------------------------------------------- 00:28:01.056 Suppressions used: 00:28:01.056 count bytes template 00:28:01.056 6 48 /usr/src/fio/parse.c 00:28:01.056 3239 310944 /usr/src/fio/iolog.c 00:28:01.056 1 8 libtcmalloc_minimal.so 00:28:01.056 1 904 libcrypto.so 00:28:01.056 ----------------------------------------------------- 00:28:01.056 00:28:01.056 00:28:01.056 real 0m12.658s 00:28:01.056 user 0m37.560s 00:28:01.056 sys 0m16.714s 00:28:01.056 11:42:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.056 11:42:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:01.056 ************************************ 00:28:01.056 END TEST bdev_fio_rw_verify 00:28:01.056 ************************************ 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ff529b97-06a3-45e2-b479-d2ddfbbac119"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ff529b97-06a3-45e2-b479-d2ddfbbac119",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "bc36621b-5721-4d6c-a78d-d18ab0d6399c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc36621b-5721-4d6c-a78d-d18ab0d6399c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "08decc1b-58a1-42c0-bd14-e6d1ae7e45d4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "08decc1b-58a1-42c0-bd14-e6d1ae7e45d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "dfdce6f2-801e-48c7-9d48-1ef54eaae09f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dfdce6f2-801e-48c7-9d48-1ef54eaae09f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "519e82ba-648a-4158-a678-37b7366294e6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "519e82ba-648a-4158-a678-37b7366294e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a2363292-76c4-4ee2-b5cf-a8cedf83a9d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a2363292-76c4-4ee2-b5cf-a8cedf83a9d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:28:01.315 /home/vagrant/spdk_repo/spdk 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
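With the fio run and teardown done, the timing for the whole bdev_fio test follows. For orientation, the job file assembled by fio_config_gen above looks roughly like the sketch below; the per-bdev job sections, rw=randwrite, and serialize_overlap=1 are confirmed by the echoes and the fio banner earlier in the log, while anything else in [global] is assumed boilerplate and omitted:

    [global]
    rw=randwrite              # confirmed by the fio banner (verify workload)
    serialize_overlap=1       # emitted after the fio-3.x version check

    [job_nvme0n1]
    filename=nvme0n1
    [job_nvme1n1]
    filename=nvme1n1
    [job_nvme1n2]
    filename=nvme1n2
    [job_nvme1n3]
    filename=nvme1n3
    [job_nvme2n1]
    filename=nvme2n1
    [job_nvme3n1]
    filename=nvme3n1

It was driven with the parameters recorded above (paths shortened):

    fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio \
        --verify_state_save=0 --spdk_json_conf=bdev.json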
00:28:01.315 00:28:01.315 real 0m12.868s 00:28:01.315 user 0m37.672s 00:28:01.315 sys 0m16.803s 00:28:01.315 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.316 11:42:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:01.316 ************************************ 00:28:01.316 END TEST bdev_fio 00:28:01.316 ************************************ 00:28:01.316 11:42:06 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:01.316 11:42:06 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:01.316 11:42:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:28:01.316 11:42:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.316 11:42:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:01.316 ************************************ 00:28:01.316 START TEST bdev_verify 00:28:01.316 ************************************ 00:28:01.316 11:42:06 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:01.574 [2024-11-20 11:42:07.086396] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:01.574 [2024-11-20 11:42:07.086592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74735 ] 00:28:01.574 [2024-11-20 11:42:07.271451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:01.833 [2024-11-20 11:42:07.428469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.833 [2024-11-20 11:42:07.428490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.401 Running I/O for 5 seconds... 
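bdev_verify feeds every xnvme bdev through the bdevperf example app; the invocation is recorded in the run_test line above and annotated here (flag meanings per bdevperf's standard usage; -C is reproduced as passed):

    bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    #   -q 128     queue depth per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write a pattern, read it back, compare
    #   -t 5       run each job for 5 seconds
    #   -m 0x3     reactors on cores 0 and 1, which is why the table below
    #              lists each bdev twice, once per core mask (0x1 and 0x2)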
00:28:04.374 22337.00 IOPS, 87.25 MiB/s [2024-11-20T11:42:11.515Z] 22015.50 IOPS, 86.00 MiB/s [2024-11-20T11:42:12.450Z] 22016.33 IOPS, 86.00 MiB/s [2024-11-20T11:42:13.384Z] 22271.50 IOPS, 87.00 MiB/s [2024-11-20T11:42:13.384Z] 22188.00 IOPS, 86.67 MiB/s 00:28:07.618 Latency(us) 00:28:07.618 [2024-11-20T11:42:13.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.618 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.618 Verification LBA range: start 0x0 length 0xa0000 00:28:07.618 nvme0n1 : 5.02 1681.96 6.57 0.00 0.00 75960.43 7745.16 72923.69 00:28:07.618 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:07.618 Verification LBA range: start 0xa0000 length 0xa0000 00:28:07.618 nvme0n1 : 5.03 1553.55 6.07 0.00 0.00 82228.57 14775.39 71017.19 00:28:07.618 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.618 Verification LBA range: start 0x0 length 0x80000 00:28:07.618 nvme1n1 : 5.03 1678.52 6.56 0.00 0.00 75960.88 9353.77 72923.69 00:28:07.618 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:07.618 Verification LBA range: start 0x80000 length 0x80000 00:28:07.618 nvme1n1 : 5.06 1542.85 6.03 0.00 0.00 82631.17 22043.93 63391.19 00:28:07.618 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x0 length 0x80000 00:28:07.619 nvme1n2 : 5.05 1673.92 6.54 0.00 0.00 76016.89 20256.58 60293.12 00:28:07.619 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x80000 length 0x80000 00:28:07.619 nvme1n2 : 5.04 1548.89 6.05 0.00 0.00 82133.55 11081.54 65297.69 00:28:07.619 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x0 length 0x80000 00:28:07.619 nvme1n3 : 5.05 1673.39 6.54 0.00 0.00 75898.89 16443.58 70540.57 00:28:07.619 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x80000 length 0x80000 00:28:07.619 nvme1n3 : 5.09 1560.18 6.09 0.00 0.00 81385.35 13107.20 62914.56 00:28:07.619 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x0 length 0x20000 00:28:07.619 nvme2n1 : 5.08 1689.83 6.60 0.00 0.00 75017.41 13881.72 66727.56 00:28:07.619 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x20000 length 0x20000 00:28:07.619 nvme2n1 : 5.06 1542.12 6.02 0.00 0.00 82167.75 15371.17 66727.56 00:28:07.619 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0x0 length 0xbd0bd 00:28:07.619 nvme3n1 : 5.08 3004.52 11.74 0.00 0.00 42079.21 4736.47 58624.93 00:28:07.619 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:07.619 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:28:07.619 nvme3n1 : 5.08 2782.90 10.87 0.00 0.00 45375.91 3366.17 60769.75 00:28:07.619 [2024-11-20T11:42:13.385Z] =================================================================================================================== 00:28:07.619 [2024-11-20T11:42:13.385Z] Total : 21932.63 85.67 0.00 0.00 69497.45 3366.17 72923.69 00:28:08.554 00:28:08.554 real 0m7.205s 00:28:08.554 user 0m11.248s 00:28:08.554 sys 0m1.874s 00:28:08.554 11:42:14 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.554 ************************************ 00:28:08.554 END TEST bdev_verify 00:28:08.554 ************************************ 00:28:08.554 11:42:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:08.554 11:42:14 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:08.554 11:42:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:28:08.555 11:42:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:08.555 11:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:08.555 ************************************ 00:28:08.555 START TEST bdev_verify_big_io 00:28:08.555 ************************************ 00:28:08.555 11:42:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:08.815 [2024-11-20 11:42:14.353454] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:08.815 [2024-11-20 11:42:14.353653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74844 ] 00:28:08.815 [2024-11-20 11:42:14.545411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:09.073 [2024-11-20 11:42:14.703801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.073 [2024-11-20 11:42:14.703808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.639 Running I/O for 5 seconds... 
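bdev_verify_big_io repeats the same bdevperf verify run with 64 KiB I/Os in place of 4 KiB, shifting the load from IOPS-bound toward bandwidth-bound; only the -o value changes:

    bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3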
00:28:15.731 1880.00 IOPS, 117.50 MiB/s [2024-11-20T11:42:21.497Z] 3152.00 IOPS, 197.00 MiB/s 00:28:15.731 Latency(us) 00:28:15.731 [2024-11-20T11:42:21.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.731 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.731 Verification LBA range: start 0x0 length 0xa000 00:28:15.731 nvme0n1 : 5.82 102.29 6.39 0.00 0.00 1206542.49 65297.69 1692973.61 00:28:15.731 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.731 Verification LBA range: start 0xa000 length 0xa000 00:28:15.731 nvme0n1 : 5.82 120.89 7.56 0.00 0.00 1032694.35 18707.55 1464193.40 00:28:15.731 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.731 Verification LBA range: start 0x0 length 0x8000 00:28:15.731 nvme1n1 : 5.92 140.55 8.78 0.00 0.00 847480.45 87222.46 1372681.31 00:28:15.731 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.731 Verification LBA range: start 0x8000 length 0x8000 00:28:15.732 nvme1n1 : 5.83 113.97 7.12 0.00 0.00 1056101.42 15728.64 941811.90 00:28:15.732 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x0 length 0x8000 00:28:15.732 nvme1n2 : 5.82 129.21 8.08 0.00 0.00 906762.66 49330.73 2303054.20 00:28:15.732 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x8000 length 0x8000 00:28:15.732 nvme1n2 : 5.84 139.77 8.74 0.00 0.00 832788.57 7685.59 1349803.29 00:28:15.732 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x0 length 0x8000 00:28:15.732 nvme1n3 : 5.92 144.53 9.03 0.00 0.00 779140.01 14000.87 1441315.37 00:28:15.732 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x8000 length 0x8000 00:28:15.732 nvme1n3 : 5.83 113.84 7.12 0.00 0.00 1001771.06 56956.74 2120030.02 00:28:15.732 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x0 length 0x2000 00:28:15.732 nvme2n1 : 5.82 133.25 8.33 0.00 0.00 823944.76 59101.56 1906501.82 00:28:15.732 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x2000 length 0x2000 00:28:15.732 nvme2n1 : 5.84 98.59 6.16 0.00 0.00 1120040.44 86269.21 2821622.69 00:28:15.732 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0x0 length 0xbd0b 00:28:15.732 nvme3n1 : 6.00 207.91 12.99 0.00 0.00 515908.09 7506.85 617706.59 00:28:15.732 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.732 Verification LBA range: start 0xbd0b length 0xbd0b 00:28:15.732 nvme3n1 : 6.01 202.32 12.64 0.00 0.00 533597.83 1832.03 812169.77 00:28:15.732 [2024-11-20T11:42:21.498Z] =================================================================================================================== 00:28:15.732 [2024-11-20T11:42:21.498Z] Total : 1647.11 102.94 0.00 0.00 837411.92 1832.03 2821622.69 00:28:17.109 00:28:17.109 real 0m8.556s 00:28:17.109 user 0m15.420s 00:28:17.109 sys 0m0.631s 00:28:17.109 11:42:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.109 ************************************ 00:28:17.109 END TEST bdev_verify_big_io 
00:28:17.109 11:42:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:28:17.109 ************************************ 00:28:17.109 11:42:22 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:17.109 11:42:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:28:17.109 11:42:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.109 11:42:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:17.109 ************************************ 00:28:17.109 START TEST bdev_write_zeroes 00:28:17.109 ************************************ 00:28:17.109 11:42:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:17.368 [2024-11-20 11:42:22.965941] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:17.368 [2024-11-20 11:42:22.966132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74954 ] 00:28:17.626 [2024-11-20 11:42:23.153180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.626 [2024-11-20 11:42:23.303642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.193 Running I/O for 1 seconds... 00:28:19.133 60096.00 IOPS, 234.75 MiB/s 00:28:19.133 Latency(us) 00:28:19.133 [2024-11-20T11:42:24.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.133 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:19.133 nvme0n1 : 1.02 8893.55 34.74 0.00 0.00 14377.58 8102.63 29550.78 00:28:19.133 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:19.133 nvme1n1 : 1.02 8879.73 34.69 0.00 0.00 14387.21 8281.37 30027.40 00:28:19.133 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:19.133 nvme1n2 : 1.03 8865.98 34.63 0.00 0.00 14396.33 8281.37 30504.03 00:28:19.133 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:19.134 nvme1n3 : 1.03 8852.39 34.58 0.00 0.00 14405.85 8281.37 30980.65 00:28:19.134 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:19.134 nvme2n1 : 1.03 8838.87 34.53 0.00 0.00 14414.98 8281.37 31457.28 00:28:19.134 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:19.134 nvme3n1 : 1.03 15030.21 58.71 0.00 0.00 8466.30 3559.80 29550.78 00:28:19.134 [2024-11-20T11:42:24.900Z] =================================================================================================================== 00:28:19.134 [2024-11-20T11:42:24.900Z] Total : 59360.73 231.88 0.00 0.00 12886.63 3559.80 31457.28 00:28:20.511 00:28:20.511 real 0m3.103s 00:28:20.511 user 0m2.266s 00:28:20.511 sys 0m0.642s 00:28:20.511 11:42:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.511 ************************************ 00:28:20.511 END TEST bdev_write_zeroes 00:28:20.511 11:42:25 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.511 ************************************ 00:28:20.511 11:42:25 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:20.511 11:42:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:28:20.511 11:42:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.511 11:42:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:20.511 ************************************ 00:28:20.511 START TEST bdev_json_nonenclosed 00:28:20.511 ************************************ 00:28:20.511 11:42:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:20.511 [2024-11-20 11:42:26.117686] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:20.511 [2024-11-20 11:42:26.117910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75009 ] 00:28:20.770 [2024-11-20 11:42:26.308044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.770 [2024-11-20 11:42:26.466043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.770 [2024-11-20 11:42:26.466185] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:28:20.770 [2024-11-20 11:42:26.466228] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:20.770 [2024-11-20 11:42:26.466247] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:21.030 00:28:21.030 real 0m0.738s 00:28:21.030 user 0m0.485s 00:28:21.030 sys 0m0.147s 00:28:21.030 11:42:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.030 11:42:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:28:21.030 ************************************ 00:28:21.030 END TEST bdev_json_nonenclosed 00:28:21.030 ************************************ 00:28:21.030 11:42:26 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:21.030 11:42:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:28:21.030 11:42:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.030 11:42:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:21.288 ************************************ 00:28:21.288 START TEST bdev_json_nonarray 00:28:21.288 ************************************ 00:28:21.288 11:42:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:21.288 [2024-11-20 11:42:26.909384] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
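Both JSON negative tests (bdev_json_nonenclosed above, bdev_json_nonarray starting here) hand bdevperf a deliberately malformed config and expect a clean non-zero exit rather than a crash. The log does not show the file contents; judging only by the error strings, the violated shapes are along these lines (illustrative, not the actual files):

    # nonenclosed.json: top-level members with no enclosing object
    #   -> 'Invalid JSON configuration: not enclosed in {}.'
    "subsystems": []

    # nonarray.json: enclosed, but 'subsystems' is not an array
    #   -> "Invalid JSON configuration: 'subsystems' should be an array."
    { "subsystems": {} }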
00:28:21.288 [2024-11-20 11:42:26.909864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75040 ] 00:28:21.547 [2024-11-20 11:42:27.102702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.547 [2024-11-20 11:42:27.258079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.547 [2024-11-20 11:42:27.258230] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:28:21.547 [2024-11-20 11:42:27.258269] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:21.547 [2024-11-20 11:42:27.258287] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:21.806 ************************************ 00:28:21.806 END TEST bdev_json_nonarray 00:28:21.806 ************************************ 00:28:21.806 00:28:21.806 real 0m0.726s 00:28:21.806 user 0m0.480s 00:28:21.806 sys 0m0.140s 00:28:21.806 11:42:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.806 11:42:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:28:21.806 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:28:21.806 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:28:21.806 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:28:21.806 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:28:21.806 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:28:22.065 11:42:27 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:22.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:23.201 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:23.201 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:28:23.201 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:23.460 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:28:23.460 ************************************ 00:28:23.460 END TEST blockdev_xnvme 00:28:23.460 ************************************ 00:28:23.460 00:28:23.460 real 0m58.462s 00:28:23.460 user 1m41.418s 00:28:23.460 sys 0m28.129s 00:28:23.460 11:42:29 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.460 11:42:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:23.460 11:42:29 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:28:23.460 11:42:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:23.460 11:42:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.460 11:42:29 -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.460 ************************************ 00:28:23.460 START TEST ublk 00:28:23.460 ************************************ 00:28:23.460 11:42:29 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:28:23.460 * Looking for test storage... 00:28:23.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:28:23.460 11:42:29 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:23.460 11:42:29 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:28:23.460 11:42:29 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:23.727 11:42:29 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:23.727 11:42:29 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.727 11:42:29 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.727 11:42:29 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.727 11:42:29 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.727 11:42:29 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.727 11:42:29 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.727 11:42:29 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.727 11:42:29 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.728 11:42:29 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.728 11:42:29 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.728 11:42:29 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.728 11:42:29 ublk -- scripts/common.sh@344 -- # case "$op" in 00:28:23.728 11:42:29 ublk -- scripts/common.sh@345 -- # : 1 00:28:23.728 11:42:29 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.728 11:42:29 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.728 11:42:29 ublk -- scripts/common.sh@365 -- # decimal 1 00:28:23.728 11:42:29 ublk -- scripts/common.sh@353 -- # local d=1 00:28:23.728 11:42:29 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.728 11:42:29 ublk -- scripts/common.sh@355 -- # echo 1 00:28:23.728 11:42:29 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.728 11:42:29 ublk -- scripts/common.sh@366 -- # decimal 2 00:28:23.728 11:42:29 ublk -- scripts/common.sh@353 -- # local d=2 00:28:23.728 11:42:29 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.728 11:42:29 ublk -- scripts/common.sh@355 -- # echo 2 00:28:23.728 11:42:29 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.728 11:42:29 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.728 11:42:29 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.728 11:42:29 ublk -- scripts/common.sh@368 -- # return 0 00:28:23.728 11:42:29 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.728 11:42:29 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:23.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.728 --rc genhtml_branch_coverage=1 00:28:23.728 --rc genhtml_function_coverage=1 00:28:23.728 --rc genhtml_legend=1 00:28:23.728 --rc geninfo_all_blocks=1 00:28:23.728 --rc geninfo_unexecuted_blocks=1 00:28:23.728 00:28:23.728 ' 00:28:23.729 11:42:29 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.729 --rc genhtml_branch_coverage=1 00:28:23.729 --rc genhtml_function_coverage=1 00:28:23.729 --rc genhtml_legend=1 00:28:23.729 --rc geninfo_all_blocks=1 00:28:23.729 --rc geninfo_unexecuted_blocks=1 00:28:23.729 00:28:23.729 ' 00:28:23.729 11:42:29 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.729 --rc genhtml_branch_coverage=1 00:28:23.729 --rc genhtml_function_coverage=1 00:28:23.729 --rc genhtml_legend=1 00:28:23.729 --rc geninfo_all_blocks=1 00:28:23.729 --rc geninfo_unexecuted_blocks=1 00:28:23.729 00:28:23.729 ' 00:28:23.729 11:42:29 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.729 --rc genhtml_branch_coverage=1 00:28:23.729 --rc genhtml_function_coverage=1 00:28:23.729 --rc genhtml_legend=1 00:28:23.729 --rc geninfo_all_blocks=1 00:28:23.729 --rc geninfo_unexecuted_blocks=1 00:28:23.729 00:28:23.729 ' 00:28:23.729 11:42:29 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:28:23.729 11:42:29 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:28:23.729 11:42:29 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:28:23.729 11:42:29 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:28:23.729 11:42:29 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:28:23.729 11:42:29 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:28:23.729 11:42:29 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:28:23.729 11:42:29 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:28:23.730 11:42:29 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:28:23.730 11:42:29 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:28:23.730 11:42:29 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:28:23.730 11:42:29 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:23.730 11:42:29 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.730 11:42:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:23.730 ************************************ 00:28:23.730 START TEST test_save_ublk_config 00:28:23.730 ************************************ 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75328 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75328 00:28:23.730 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75328 ']' 00:28:23.731 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.731 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.731 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.731 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.731 11:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:23.731 [2024-11-20 11:42:29.455567] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
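Note on the startup handshake traced above: the test launches spdk_tgt with ublk debug logging and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket (this loop is a simplified illustration, not the real helper):

  # Start the target in the background, then poll the RPC socket until ready.
  build/bin/spdk_tgt -L ublk &
  tgtpid=$!
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done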
00:28:23.731 [2024-11-20 11:42:29.455758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75328 ] 00:28:23.992 [2024-11-20 11:42:29.646215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.252 [2024-11-20 11:42:29.786830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:25.188 [2024-11-20 11:42:30.706632] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:25.188 [2024-11-20 11:42:30.707775] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:25.188 malloc0 00:28:25.188 [2024-11-20 11:42:30.798843] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:28:25.188 [2024-11-20 11:42:30.798975] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:28:25.188 [2024-11-20 11:42:30.798995] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:25.188 [2024-11-20 11:42:30.799004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:25.188 [2024-11-20 11:42:30.810743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:25.188 [2024-11-20 11:42:30.810775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:25.188 [2024-11-20 11:42:30.818603] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:25.188 [2024-11-20 11:42:30.818741] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:25.188 [2024-11-20 11:42:30.839607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:25.188 0 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.188 11:42:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:25.448 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.448 11:42:31 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:28:25.448 "subsystems": [ 00:28:25.448 { 00:28:25.448 "subsystem": "fsdev", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "fsdev_set_opts", 00:28:25.448 "params": { 00:28:25.448 "fsdev_io_pool_size": 65535, 00:28:25.448 "fsdev_io_cache_size": 256 00:28:25.448 } 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "keyring", 00:28:25.448 "config": [] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "iobuf", 00:28:25.448 "config": [ 00:28:25.448 { 
00:28:25.448 "method": "iobuf_set_options", 00:28:25.448 "params": { 00:28:25.448 "small_pool_count": 8192, 00:28:25.448 "large_pool_count": 1024, 00:28:25.448 "small_bufsize": 8192, 00:28:25.448 "large_bufsize": 135168, 00:28:25.448 "enable_numa": false 00:28:25.448 } 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "sock", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "sock_set_default_impl", 00:28:25.448 "params": { 00:28:25.448 "impl_name": "posix" 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "sock_impl_set_options", 00:28:25.448 "params": { 00:28:25.448 "impl_name": "ssl", 00:28:25.448 "recv_buf_size": 4096, 00:28:25.448 "send_buf_size": 4096, 00:28:25.448 "enable_recv_pipe": true, 00:28:25.448 "enable_quickack": false, 00:28:25.448 "enable_placement_id": 0, 00:28:25.448 "enable_zerocopy_send_server": true, 00:28:25.448 "enable_zerocopy_send_client": false, 00:28:25.448 "zerocopy_threshold": 0, 00:28:25.448 "tls_version": 0, 00:28:25.448 "enable_ktls": false 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "sock_impl_set_options", 00:28:25.448 "params": { 00:28:25.448 "impl_name": "posix", 00:28:25.448 "recv_buf_size": 2097152, 00:28:25.448 "send_buf_size": 2097152, 00:28:25.448 "enable_recv_pipe": true, 00:28:25.448 "enable_quickack": false, 00:28:25.448 "enable_placement_id": 0, 00:28:25.448 "enable_zerocopy_send_server": true, 00:28:25.448 "enable_zerocopy_send_client": false, 00:28:25.448 "zerocopy_threshold": 0, 00:28:25.448 "tls_version": 0, 00:28:25.448 "enable_ktls": false 00:28:25.448 } 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "vmd", 00:28:25.448 "config": [] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "accel", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "accel_set_options", 00:28:25.448 "params": { 00:28:25.448 "small_cache_size": 128, 00:28:25.448 "large_cache_size": 16, 00:28:25.448 "task_count": 2048, 00:28:25.448 "sequence_count": 2048, 00:28:25.448 "buf_count": 2048 00:28:25.448 } 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "bdev", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "bdev_set_options", 00:28:25.448 "params": { 00:28:25.448 "bdev_io_pool_size": 65535, 00:28:25.448 "bdev_io_cache_size": 256, 00:28:25.448 "bdev_auto_examine": true, 00:28:25.448 "iobuf_small_cache_size": 128, 00:28:25.448 "iobuf_large_cache_size": 16 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "bdev_raid_set_options", 00:28:25.448 "params": { 00:28:25.448 "process_window_size_kb": 1024, 00:28:25.448 "process_max_bandwidth_mb_sec": 0 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "bdev_iscsi_set_options", 00:28:25.448 "params": { 00:28:25.448 "timeout_sec": 30 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "bdev_nvme_set_options", 00:28:25.448 "params": { 00:28:25.448 "action_on_timeout": "none", 00:28:25.448 "timeout_us": 0, 00:28:25.448 "timeout_admin_us": 0, 00:28:25.448 "keep_alive_timeout_ms": 10000, 00:28:25.448 "arbitration_burst": 0, 00:28:25.448 "low_priority_weight": 0, 00:28:25.448 "medium_priority_weight": 0, 00:28:25.448 "high_priority_weight": 0, 00:28:25.448 "nvme_adminq_poll_period_us": 10000, 00:28:25.448 "nvme_ioq_poll_period_us": 0, 00:28:25.448 "io_queue_requests": 0, 00:28:25.448 "delay_cmd_submit": true, 00:28:25.448 "transport_retry_count": 4, 00:28:25.448 
"bdev_retry_count": 3, 00:28:25.448 "transport_ack_timeout": 0, 00:28:25.448 "ctrlr_loss_timeout_sec": 0, 00:28:25.448 "reconnect_delay_sec": 0, 00:28:25.448 "fast_io_fail_timeout_sec": 0, 00:28:25.448 "disable_auto_failback": false, 00:28:25.448 "generate_uuids": false, 00:28:25.448 "transport_tos": 0, 00:28:25.448 "nvme_error_stat": false, 00:28:25.448 "rdma_srq_size": 0, 00:28:25.448 "io_path_stat": false, 00:28:25.448 "allow_accel_sequence": false, 00:28:25.448 "rdma_max_cq_size": 0, 00:28:25.448 "rdma_cm_event_timeout_ms": 0, 00:28:25.448 "dhchap_digests": [ 00:28:25.448 "sha256", 00:28:25.448 "sha384", 00:28:25.448 "sha512" 00:28:25.448 ], 00:28:25.448 "dhchap_dhgroups": [ 00:28:25.448 "null", 00:28:25.448 "ffdhe2048", 00:28:25.448 "ffdhe3072", 00:28:25.448 "ffdhe4096", 00:28:25.448 "ffdhe6144", 00:28:25.448 "ffdhe8192" 00:28:25.448 ] 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "bdev_nvme_set_hotplug", 00:28:25.448 "params": { 00:28:25.448 "period_us": 100000, 00:28:25.448 "enable": false 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "bdev_malloc_create", 00:28:25.448 "params": { 00:28:25.448 "name": "malloc0", 00:28:25.448 "num_blocks": 8192, 00:28:25.448 "block_size": 4096, 00:28:25.448 "physical_block_size": 4096, 00:28:25.448 "uuid": "bd3c08b7-57fd-4ebd-8d9a-bfae26b975f7", 00:28:25.448 "optimal_io_boundary": 0, 00:28:25.448 "md_size": 0, 00:28:25.448 "dif_type": 0, 00:28:25.448 "dif_is_head_of_md": false, 00:28:25.448 "dif_pi_format": 0 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "bdev_wait_for_examine" 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "scsi", 00:28:25.448 "config": null 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "scheduler", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "framework_set_scheduler", 00:28:25.448 "params": { 00:28:25.448 "name": "static" 00:28:25.448 } 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "vhost_scsi", 00:28:25.448 "config": [] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "vhost_blk", 00:28:25.448 "config": [] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "ublk", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "ublk_create_target", 00:28:25.448 "params": { 00:28:25.448 "cpumask": "1" 00:28:25.448 } 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "method": "ublk_start_disk", 00:28:25.448 "params": { 00:28:25.448 "bdev_name": "malloc0", 00:28:25.448 "ublk_id": 0, 00:28:25.448 "num_queues": 1, 00:28:25.448 "queue_depth": 128 00:28:25.448 } 00:28:25.448 } 00:28:25.448 ] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "nbd", 00:28:25.448 "config": [] 00:28:25.448 }, 00:28:25.448 { 00:28:25.448 "subsystem": "nvmf", 00:28:25.448 "config": [ 00:28:25.448 { 00:28:25.448 "method": "nvmf_set_config", 00:28:25.448 "params": { 00:28:25.448 "discovery_filter": "match_any", 00:28:25.448 "admin_cmd_passthru": { 00:28:25.448 "identify_ctrlr": false 00:28:25.448 }, 00:28:25.448 "dhchap_digests": [ 00:28:25.448 "sha256", 00:28:25.448 "sha384", 00:28:25.448 "sha512" 00:28:25.448 ], 00:28:25.448 "dhchap_dhgroups": [ 00:28:25.448 "null", 00:28:25.448 "ffdhe2048", 00:28:25.448 "ffdhe3072", 00:28:25.448 "ffdhe4096", 00:28:25.448 "ffdhe6144", 00:28:25.448 "ffdhe8192" 00:28:25.448 ] 00:28:25.449 } 00:28:25.449 }, 00:28:25.449 { 00:28:25.449 "method": "nvmf_set_max_subsystems", 00:28:25.449 "params": { 00:28:25.449 "max_subsystems": 1024 
00:28:25.449 } 00:28:25.449 }, 00:28:25.449 { 00:28:25.449 "method": "nvmf_set_crdt", 00:28:25.449 "params": { 00:28:25.449 "crdt1": 0, 00:28:25.449 "crdt2": 0, 00:28:25.449 "crdt3": 0 00:28:25.449 } 00:28:25.449 } 00:28:25.449 ] 00:28:25.449 }, 00:28:25.449 { 00:28:25.449 "subsystem": "iscsi", 00:28:25.449 "config": [ 00:28:25.449 { 00:28:25.449 "method": "iscsi_set_options", 00:28:25.449 "params": { 00:28:25.449 "node_base": "iqn.2016-06.io.spdk", 00:28:25.449 "max_sessions": 128, 00:28:25.449 "max_connections_per_session": 2, 00:28:25.449 "max_queue_depth": 64, 00:28:25.449 "default_time2wait": 2, 00:28:25.449 "default_time2retain": 20, 00:28:25.449 "first_burst_length": 8192, 00:28:25.449 "immediate_data": true, 00:28:25.449 "allow_duplicated_isid": false, 00:28:25.449 "error_recovery_level": 0, 00:28:25.449 "nop_timeout": 60, 00:28:25.449 "nop_in_interval": 30, 00:28:25.449 "disable_chap": false, 00:28:25.449 "require_chap": false, 00:28:25.449 "mutual_chap": false, 00:28:25.449 "chap_group": 0, 00:28:25.449 "max_large_datain_per_connection": 64, 00:28:25.449 "max_r2t_per_connection": 4, 00:28:25.449 "pdu_pool_size": 36864, 00:28:25.449 "immediate_data_pool_size": 16384, 00:28:25.449 "data_out_pool_size": 2048 00:28:25.449 } 00:28:25.449 } 00:28:25.449 ] 00:28:25.449 } 00:28:25.449 ] 00:28:25.449 }' 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75328 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75328 ']' 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75328 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75328 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:25.449 killing process with pid 75328 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75328' 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75328 00:28:25.449 11:42:31 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75328 00:28:28.005 [2024-11-20 11:42:33.272373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:28.005 [2024-11-20 11:42:33.307736] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:28.005 [2024-11-20 11:42:33.307887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:28.005 [2024-11-20 11:42:33.318642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:28.005 [2024-11-20 11:42:33.318707] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:28.005 [2024-11-20 11:42:33.318728] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:28.005 [2024-11-20 11:42:33.318763] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:28.005 [2024-11-20 11:42:33.318953] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:29.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
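The JSON blob dumped above is the output of the save_config RPC, and it is exactly what the second half of the test replays into a fresh target. A minimal sketch of that round trip, assuming the default RPC socket and an illustrative file path:

  # Capture the live JSON configuration of the running target.
  scripts/rpc.py save_config > /tmp/ublk_config.json
  # Tear the first target down, then boot a new one with the saved state.
  kill "$tgtpid"; wait "$tgtpid"
  build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json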
00:28:29.383 11:42:35 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75404 00:28:29.383 11:42:35 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75404 00:28:29.383 11:42:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:28:29.383 11:42:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75404 ']' 00:28:29.383 11:42:35 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.383 11:42:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.383 11:42:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:28:29.383 "subsystems": [ 00:28:29.383 { 00:28:29.383 "subsystem": "fsdev", 00:28:29.383 "config": [ 00:28:29.383 { 00:28:29.383 "method": "fsdev_set_opts", 00:28:29.383 "params": { 00:28:29.383 "fsdev_io_pool_size": 65535, 00:28:29.383 "fsdev_io_cache_size": 256 00:28:29.383 } 00:28:29.383 } 00:28:29.383 ] 00:28:29.383 }, 00:28:29.383 { 00:28:29.383 "subsystem": "keyring", 00:28:29.383 "config": [] 00:28:29.383 }, 00:28:29.383 { 00:28:29.383 "subsystem": "iobuf", 00:28:29.383 "config": [ 00:28:29.383 { 00:28:29.383 "method": "iobuf_set_options", 00:28:29.383 "params": { 00:28:29.383 "small_pool_count": 8192, 00:28:29.383 "large_pool_count": 1024, 00:28:29.383 "small_bufsize": 8192, 00:28:29.383 "large_bufsize": 135168, 00:28:29.383 "enable_numa": false 00:28:29.383 } 00:28:29.383 } 00:28:29.383 ] 00:28:29.383 }, 00:28:29.383 { 00:28:29.383 "subsystem": "sock", 00:28:29.383 "config": [ 00:28:29.383 { 00:28:29.383 "method": "sock_set_default_impl", 00:28:29.383 "params": { 00:28:29.383 "impl_name": "posix" 00:28:29.383 } 00:28:29.383 }, 00:28:29.383 { 00:28:29.383 "method": "sock_impl_set_options", 00:28:29.383 "params": { 00:28:29.383 "impl_name": "ssl", 00:28:29.383 "recv_buf_size": 4096, 00:28:29.383 "send_buf_size": 4096, 00:28:29.383 "enable_recv_pipe": true, 00:28:29.383 "enable_quickack": false, 00:28:29.383 "enable_placement_id": 0, 00:28:29.383 "enable_zerocopy_send_server": true, 00:28:29.383 "enable_zerocopy_send_client": false, 00:28:29.384 "zerocopy_threshold": 0, 00:28:29.384 "tls_version": 0, 00:28:29.384 "enable_ktls": false 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "sock_impl_set_options", 00:28:29.384 "params": { 00:28:29.384 "impl_name": "posix", 00:28:29.384 "recv_buf_size": 2097152, 00:28:29.384 "send_buf_size": 2097152, 00:28:29.384 "enable_recv_pipe": true, 00:28:29.384 "enable_quickack": false, 00:28:29.384 "enable_placement_id": 0, 00:28:29.384 "enable_zerocopy_send_server": true, 00:28:29.384 "enable_zerocopy_send_client": false, 00:28:29.384 "zerocopy_threshold": 0, 00:28:29.384 "tls_version": 0, 00:28:29.384 "enable_ktls": false 00:28:29.384 } 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "vmd", 00:28:29.384 "config": [] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "accel", 00:28:29.384 "config": [ 00:28:29.384 { 00:28:29.384 "method": "accel_set_options", 00:28:29.384 "params": { 00:28:29.384 "small_cache_size": 128, 00:28:29.384 "large_cache_size": 16, 00:28:29.384 "task_count": 2048, 00:28:29.384 "sequence_count": 2048, 00:28:29.384 "buf_count": 2048 00:28:29.384 } 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "bdev", 00:28:29.384 "config": [ 00:28:29.384 { 00:28:29.384 "method": "bdev_set_options", 00:28:29.384 
"params": { 00:28:29.384 "bdev_io_pool_size": 65535, 00:28:29.384 "bdev_io_cache_size": 256, 00:28:29.384 "bdev_auto_examine": true, 00:28:29.384 "iobuf_small_cache_size": 128, 00:28:29.384 "iobuf_large_cache_size": 16 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "bdev_raid_set_options", 00:28:29.384 "params": { 00:28:29.384 "process_window_size_kb": 1024, 00:28:29.384 "process_max_bandwidth_mb_sec": 0 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "bdev_iscsi_set_options", 00:28:29.384 "params": { 00:28:29.384 "timeout_sec": 30 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "bdev_nvme_set_options", 00:28:29.384 "params": { 00:28:29.384 "action_on_timeout": "none", 00:28:29.384 "timeout_us": 0, 00:28:29.384 "timeout_admin_us": 0, 00:28:29.384 "keep_alive_timeout_ms": 10000, 00:28:29.384 "arbitration_burst": 0, 00:28:29.384 "low_priority_weight": 0, 00:28:29.384 "medium_priority_weight": 0, 00:28:29.384 "high_priority_weight": 0, 00:28:29.384 "nvme_adminq_poll_period_us": 10000, 00:28:29.384 "nvme_ioq_poll_period_us": 0, 00:28:29.384 "io_queue_requests": 0, 00:28:29.384 "delay_cmd_submit": true, 00:28:29.384 "transport_retry_count": 4, 00:28:29.384 "bdev_retry_count": 3, 00:28:29.384 "transport_ack_timeout": 0, 00:28:29.384 "ctrlr_loss_timeout_sec": 0, 00:28:29.384 "reconnect_delay_sec": 0, 00:28:29.384 "fast_io_fail_timeout_sec": 0, 00:28:29.384 "disable_auto_failback": false, 00:28:29.384 "generate_uuids": false, 00:28:29.384 "transport_tos": 0, 00:28:29.384 "nvme_error_stat": false, 00:28:29.384 "rdma_srq_size": 0, 00:28:29.384 "io_path_stat": false, 00:28:29.384 "allow_accel_sequence": false, 00:28:29.384 "rdma_max_cq_size": 0, 00:28:29.384 "rdma_cm_event_timeout_ms": 0, 00:28:29.384 "dhchap_digests": [ 00:28:29.384 "sha256", 00:28:29.384 "sha384", 00:28:29.384 "sha512" 00:28:29.384 ], 00:28:29.384 "dhchap_dhgroups": [ 00:28:29.384 "null", 00:28:29.384 "ffdhe2048", 00:28:29.384 "ffdhe3072", 00:28:29.384 "ffdhe4096", 00:28:29.384 "ffdhe6144", 00:28:29.384 "ffdhe8192" 00:28:29.384 ] 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "bdev_nvme_set_hotplug", 00:28:29.384 "params": { 00:28:29.384 "period_us": 100000, 00:28:29.384 "enable": false 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "bdev_malloc_create", 00:28:29.384 "params": { 00:28:29.384 "name": "malloc0", 00:28:29.384 "num_blocks": 8192, 00:28:29.384 "block_size": 4096, 00:28:29.384 "physical_block_size": 4096, 00:28:29.384 "uuid": "bd3c08b7-57fd-4ebd-8d9a-bfae26b975f7", 00:28:29.384 "optimal_io_boundary": 0, 00:28:29.384 "md_size": 0, 00:28:29.384 "dif_type": 0, 00:28:29.384 "dif_is_head_of_md": false, 00:28:29.384 "dif_pi_format": 0 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "bdev_wait_for_examine" 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "scsi", 00:28:29.384 "config": null 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "scheduler", 00:28:29.384 "config": [ 00:28:29.384 { 00:28:29.384 "method": "framework_set_scheduler", 00:28:29.384 "params": { 00:28:29.384 "name": "static" 00:28:29.384 } 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "vhost_scsi", 00:28:29.384 "config": [] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "vhost_blk", 00:28:29.384 "config": [] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "ublk", 00:28:29.384 "config": [ 00:28:29.384 { 00:28:29.384 "method": 
"ublk_create_target", 00:28:29.384 "params": { 00:28:29.384 "cpumask": "1" 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "ublk_start_disk", 00:28:29.384 "params": { 00:28:29.384 "bdev_name": "malloc0", 00:28:29.384 "ublk_id": 0, 00:28:29.384 "num_queues": 1, 00:28:29.384 "queue_depth": 128 00:28:29.384 } 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "nbd", 00:28:29.384 "config": [] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "nvmf", 00:28:29.384 "config": [ 00:28:29.384 { 00:28:29.384 "method": "nvmf_set_config", 00:28:29.384 "params": { 00:28:29.384 "discovery_filter": "match_any", 00:28:29.384 "admin_cmd_passthru": { 00:28:29.384 "identify_ctrlr": false 00:28:29.384 }, 00:28:29.384 "dhchap_digests": [ 00:28:29.384 "sha256", 00:28:29.384 "sha384", 00:28:29.384 "sha512" 00:28:29.384 ], 00:28:29.384 "dhchap_dhgroups": [ 00:28:29.384 "null", 00:28:29.384 "ffdhe2048", 00:28:29.384 "ffdhe3072", 00:28:29.384 "ffdhe4096", 00:28:29.384 "ffdhe6144", 00:28:29.384 "ffdhe8192" 00:28:29.384 ] 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "nvmf_set_max_subsystems", 00:28:29.384 "params": { 00:28:29.384 "max_subsystems": 1024 00:28:29.384 } 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "method": "nvmf_set_crdt", 00:28:29.384 "params": { 00:28:29.384 "crdt1": 0, 00:28:29.384 "crdt2": 0, 00:28:29.384 "crdt3": 0 00:28:29.384 } 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }, 00:28:29.384 { 00:28:29.384 "subsystem": "iscsi", 00:28:29.384 "config": [ 00:28:29.384 { 00:28:29.384 "method": "iscsi_set_options", 00:28:29.384 "params": { 00:28:29.384 "node_base": "iqn.2016-06.io.spdk", 00:28:29.384 "max_sessions": 128, 00:28:29.384 "max_connections_per_session": 2, 00:28:29.384 "max_queue_depth": 64, 00:28:29.384 "default_time2wait": 2, 00:28:29.384 "default_time2retain": 20, 00:28:29.384 "first_burst_length": 8192, 00:28:29.384 "immediate_data": true, 00:28:29.384 "allow_duplicated_isid": false, 00:28:29.384 "error_recovery_level": 0, 00:28:29.384 "nop_timeout": 60, 00:28:29.384 "nop_in_interval": 30, 00:28:29.384 "disable_chap": false, 00:28:29.384 "require_chap": false, 00:28:29.384 "mutual_chap": false, 00:28:29.384 "chap_group": 0, 00:28:29.384 "max_large_datain_per_connection": 64, 00:28:29.384 "max_r2t_per_connection": 4, 00:28:29.384 "pdu_pool_size": 36864, 00:28:29.384 "immediate_data_pool_size": 16384, 00:28:29.384 "data_out_pool_size": 2048 00:28:29.384 } 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 } 00:28:29.384 ] 00:28:29.384 }' 00:28:29.384 11:42:35 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.384 11:42:35 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.384 11:42:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:29.644 [2024-11-20 11:42:35.273033] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:28:29.644 [2024-11-20 11:42:35.273465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75404 ] 00:28:29.902 [2024-11-20 11:42:35.458058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.902 [2024-11-20 11:42:35.621868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.278 [2024-11-20 11:42:36.695686] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:31.278 [2024-11-20 11:42:36.696923] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:31.278 [2024-11-20 11:42:36.703824] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:28:31.279 [2024-11-20 11:42:36.703973] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:28:31.279 [2024-11-20 11:42:36.703992] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:31.279 [2024-11-20 11:42:36.704002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:31.279 [2024-11-20 11:42:36.710864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:31.279 [2024-11-20 11:42:36.710896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:31.279 [2024-11-20 11:42:36.718575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:31.279 [2024-11-20 11:42:36.718691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:31.279 [2024-11-20 11:42:36.735572] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75404 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75404 ']' 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75404 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75404 00:28:31.279 killing process with pid 75404 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.279 
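The "-c /dev/fd/63" in the spdk_tgt command line above is bash process substitution: the test echoes the saved JSON and hands the target a file descriptor rather than a file on disk, which is why the ublk device comes back up (and the /dev/ublkb0 block-device check passes) without any explicit RPC calls. Equivalent sketch, with $config holding the JSON captured earlier:

  # /dev/fd/63 in the trace is what <(...) expands to.
  build/bin/spdk_tgt -L ublk -c <(echo "$config")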
11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75404' 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75404 00:28:31.279 11:42:36 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75404 00:28:32.655 [2024-11-20 11:42:38.371420] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:32.655 [2024-11-20 11:42:38.402845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:32.655 [2024-11-20 11:42:38.403008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:32.655 [2024-11-20 11:42:38.410669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:32.655 [2024-11-20 11:42:38.410734] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:32.655 [2024-11-20 11:42:38.410748] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:32.655 [2024-11-20 11:42:38.410783] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:32.655 [2024-11-20 11:42:38.410957] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:34.556 11:42:40 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:28:34.556 00:28:34.556 real 0m10.930s 00:28:34.556 user 0m8.028s 00:28:34.556 sys 0m4.012s 00:28:34.556 ************************************ 00:28:34.556 END TEST test_save_ublk_config 00:28:34.556 ************************************ 00:28:34.556 11:42:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.556 11:42:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.556 11:42:40 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75491 00:28:34.556 11:42:40 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:34.556 11:42:40 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:28:34.556 11:42:40 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75491 00:28:34.556 11:42:40 ublk -- common/autotest_common.sh@835 -- # '[' -z 75491 ']' 00:28:34.556 11:42:40 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.556 11:42:40 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.556 11:42:40 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.556 11:42:40 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.556 11:42:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:34.814 [2024-11-20 11:42:40.426955] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
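Unlike the single-core save-config run, this target is started with -m 0x3, a hex core mask that brings up reactors on cores 0 and 1 (the two "Reactor started" notices that follow). The ublk_create_target call later in the trace uses cpumask "1", confining the ublk pollers to core 0. Illustrative pairing, where the rpc.py flag spelling is an assumption (the saved-config parameter name is "cpumask"):

  # Two reactors (cores 0-1); ublk target confined to core 0.
  build/bin/spdk_tgt -m 0x3 -L ublk &
  scripts/rpc.py ublk_create_target --cpumask 1   # flag name assumed; JSON param is "cpumask"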
00:28:34.814 [2024-11-20 11:42:40.427164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75491 ] 00:28:35.073 [2024-11-20 11:42:40.619871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:35.073 [2024-11-20 11:42:40.781015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.073 [2024-11-20 11:42:40.781028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.010 11:42:41 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.010 11:42:41 ublk -- common/autotest_common.sh@868 -- # return 0 00:28:36.010 11:42:41 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:28:36.010 11:42:41 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.010 11:42:41 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.010 11:42:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:36.010 ************************************ 00:28:36.010 START TEST test_create_ublk 00:28:36.010 ************************************ 00:28:36.010 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:28:36.010 11:42:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:28:36.010 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.010 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:36.010 [2024-11-20 11:42:41.696654] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:36.010 [2024-11-20 11:42:41.699653] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:36.010 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.010 11:42:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:28:36.010 11:42:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:28:36.010 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.010 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:36.269 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.269 11:42:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:28:36.269 11:42:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:28:36.269 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.269 11:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:36.269 [2024-11-20 11:42:42.002767] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:28:36.269 [2024-11-20 11:42:42.003316] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:28:36.269 [2024-11-20 11:42:42.003345] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:36.269 [2024-11-20 11:42:42.003357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:36.269 [2024-11-20 11:42:42.011152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:36.269 [2024-11-20 11:42:42.011182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:36.269 
[2024-11-20 11:42:42.017609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:36.269 [2024-11-20 11:42:42.028734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:36.527 [2024-11-20 11:42:42.044570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:36.527 11:42:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.527 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:28:36.528 11:42:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.528 11:42:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:36.528 11:42:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:28:36.528 { 00:28:36.528 "ublk_device": "/dev/ublkb0", 00:28:36.528 "id": 0, 00:28:36.528 "queue_depth": 512, 00:28:36.528 "num_queues": 4, 00:28:36.528 "bdev_name": "Malloc0" 00:28:36.528 } 00:28:36.528 ]' 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:28:36.528 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:28:36.786 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:28:36.786 11:42:42 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
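The run_fio_test helper traced above assembles a single fio command, which appears expanded next in the trace: a time-bounded direct-I/O pattern write over the full 128 MiB device, with verify metadata embedded so the 0xcc pattern can be checked. Its essential shape, with parameters copied from the trace:

  fio --name=fio_test --filename=/dev/ublkb0 \
      --offset=0 --size=134217728 --rw=write --direct=1 \
      --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0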
00:28:36.786 11:42:42 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:28:36.786 fio: verification read phase will never start because write phase uses all of runtime 00:28:36.786 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:28:36.786 fio-3.35 00:28:36.786 Starting 1 process 00:28:49.018 00:28:49.018 fio_test: (groupid=0, jobs=1): err= 0: pid=75542: Wed Nov 20 11:42:52 2024 00:28:49.018 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(411MiB/10001msec); 0 zone resets 00:28:49.018 clat (usec): min=66, max=12376, avg=93.64, stdev=170.60 00:28:49.018 lat (usec): min=67, max=12454, avg=94.36, stdev=170.67 00:28:49.018 clat percentiles (usec): 00:28:49.018 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 76], 20.00th=[ 77], 00:28:49.018 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82], 00:28:49.018 | 70.00th=[ 87], 80.00th=[ 91], 90.00th=[ 97], 95.00th=[ 108], 00:28:49.018 | 99.00th=[ 126], 99.50th=[ 143], 99.90th=[ 3294], 99.95th=[ 3621], 00:28:49.018 | 99.99th=[ 4146] 00:28:49.018 bw ( KiB/s): min=17328, max=44160, per=99.88%, avg=42078.74, stdev=6004.86, samples=19 00:28:49.018 iops : min= 4332, max=11040, avg=10519.68, stdev=1501.21, samples=19 00:28:49.018 lat (usec) : 100=91.97%, 250=7.60%, 500=0.01%, 750=0.02%, 1000=0.02% 00:28:49.018 lat (msec) : 2=0.13%, 4=0.23%, 10=0.02%, 20=0.01% 00:28:49.018 cpu : usr=2.70%, sys=7.29%, ctx=105331, majf=0, minf=796 00:28:49.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.018 issued rwts: total=0,105330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:49.018 00:28:49.018 Run status group 0 (all jobs): 00:28:49.018 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=411MiB (431MB), run=10001-10001msec 00:28:49.018 00:28:49.018 Disk stats (read/write): 00:28:49.018 ublkb0: ios=0/104195, merge=0/0, ticks=0/8975, in_queue=8976, util=99.05% 00:28:49.018 11:42:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 [2024-11-20 11:42:52.567567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:49.018 [2024-11-20 11:42:52.607269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:49.018 [2024-11-20 11:42:52.608297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:49.018 [2024-11-20 11:42:52.618747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:49.018 [2024-11-20 11:42:52.619172] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:49.018 [2024-11-20 11:42:52.619239] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:52 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:28:49.018 
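The "NOT rpc_cmd ublk_stop_disk 0" that begins here is a negative test: the disk was already stopped above, so a second stop must fail with the "No such device" JSON-RPC error dumped below, and the NOT helper inverts the exit status so the failure counts as a pass. A helper-free sketch of the same assertion, assuming the default RPC socket:

  # A second stop of ublk id 0 must fail; treat success as a test error.
  if scripts/rpc.py ublk_stop_disk 0 2>/dev/null; then
      echo "ERROR: stopping a missing ublk device unexpectedly succeeded" >&2
      exit 1
  fi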
11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 [2024-11-20 11:42:52.642704] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:28:49.018 request: 00:28:49.018 { 00:28:49.018 "ublk_id": 0, 00:28:49.018 "method": "ublk_stop_disk", 00:28:49.018 "req_id": 1 00:28:49.018 } 00:28:49.018 Got JSON-RPC error response 00:28:49.018 response: 00:28:49.018 { 00:28:49.018 "code": -19, 00:28:49.018 "message": "No such device" 00:28:49.018 } 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:49.018 11:42:52 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 [2024-11-20 11:42:52.658716] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:49.018 [2024-11-20 11:42:52.666612] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:49.018 [2024-11-20 11:42:52.666666] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:52 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:53 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@26 -- # 
'[' 0 == 0 ']' 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:28:49.018 ************************************ 00:28:49.018 END TEST test_create_ublk 00:28:49.018 ************************************ 00:28:49.018 11:42:53 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:28:49.018 00:28:49.018 real 0m11.811s 00:28:49.018 user 0m0.701s 00:28:49.018 sys 0m0.846s 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.018 11:42:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 11:42:53 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:28:49.018 11:42:53 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:49.018 11:42:53 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.018 11:42:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 ************************************ 00:28:49.018 START TEST test_create_multi_ublk 00:28:49.018 ************************************ 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 [2024-11-20 11:42:53.558646] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:49.018 [2024-11-20 11:42:53.561634] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.018 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.018 [2024-11-20 11:42:53.866747] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:28:49.018 [2024-11-20 
11:42:53.867300] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:28:49.018 [2024-11-20 11:42:53.867323] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:49.018 [2024-11-20 11:42:53.867340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:49.018 [2024-11-20 11:42:53.874998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:49.019 [2024-11-20 11:42:53.875030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:49.019 [2024-11-20 11:42:53.882595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:49.019 [2024-11-20 11:42:53.883363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:28:49.019 [2024-11-20 11:42:53.893666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:49.019 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.019 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:28:49.019 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.019 11:42:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:28:49.019 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.019 11:42:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 [2024-11-20 11:42:54.204824] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:28:49.019 [2024-11-20 11:42:54.205396] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:28:49.019 [2024-11-20 11:42:54.205424] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:28:49.019 [2024-11-20 11:42:54.205435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:28:49.019 [2024-11-20 11:42:54.213185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:49.019 [2024-11-20 11:42:54.213212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:49.019 [2024-11-20 11:42:54.219632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:49.019 [2024-11-20 11:42:54.220376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:28:49.019 [2024-11-20 11:42:54.242637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 [2024-11-20 11:42:54.542755] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:28:49.019 [2024-11-20 11:42:54.543275] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:28:49.019 [2024-11-20 11:42:54.543297] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:28:49.019 [2024-11-20 11:42:54.543310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:28:49.019 [2024-11-20 11:42:54.550594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:49.019 [2024-11-20 11:42:54.550629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:49.019 [2024-11-20 11:42:54.558570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:49.019 [2024-11-20 11:42:54.559380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:28:49.019 [2024-11-20 11:42:54.567606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.019 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 [2024-11-20 11:42:54.874792] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:28:49.278 [2024-11-20 11:42:54.875295] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:28:49.278 [2024-11-20 11:42:54.875322] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:28:49.278 [2024-11-20 11:42:54.875333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:28:49.278 [2024-11-20 11:42:54.882592] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:49.278 [2024-11-20 11:42:54.882620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:49.278 [2024-11-20 11:42:54.890607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:49.278 [2024-11-20 11:42:54.891358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:28:49.278 [2024-11-20 11:42:54.899659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:28:49.278 { 00:28:49.278 "ublk_device": "/dev/ublkb0", 00:28:49.278 "id": 0, 00:28:49.278 "queue_depth": 512, 00:28:49.278 "num_queues": 4, 00:28:49.278 "bdev_name": "Malloc0" 00:28:49.278 }, 00:28:49.278 { 00:28:49.278 "ublk_device": "/dev/ublkb1", 00:28:49.278 "id": 1, 00:28:49.278 "queue_depth": 512, 00:28:49.278 "num_queues": 4, 00:28:49.278 "bdev_name": "Malloc1" 00:28:49.278 }, 00:28:49.278 { 00:28:49.278 "ublk_device": "/dev/ublkb2", 00:28:49.278 "id": 2, 00:28:49.278 "queue_depth": 512, 00:28:49.278 "num_queues": 4, 00:28:49.278 "bdev_name": "Malloc2" 00:28:49.278 }, 00:28:49.278 { 00:28:49.278 "ublk_device": "/dev/ublkb3", 00:28:49.278 "id": 3, 00:28:49.278 "queue_depth": 512, 00:28:49.278 "num_queues": 4, 00:28:49.278 "bdev_name": "Malloc3" 00:28:49.278 } 00:28:49.278 ]' 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:28:49.278 11:42:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:28:49.278 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:28:49.278 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:28:49.537 11:42:55 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:28:49.537 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:28:49.867 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:28:49.868 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:28:49.868 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:28:49.868 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:28:49.868 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:49.868 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:28:50.126 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.385 11:42:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:50.385 [2024-11-20 11:42:55.959856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:28:50.385 [2024-11-20 11:42:56.003216] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:50.385 [2024-11-20 11:42:56.004460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:50.385 [2024-11-20 11:42:56.010590] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:50.385 [2024-11-20 11:42:56.010909] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:50.385 [2024-11-20 11:42:56.010935] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:50.385 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.385 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:50.385 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:28:50.385 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.385 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:50.385 [2024-11-20 11:42:56.018677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:28:50.385 [2024-11-20 11:42:56.061564] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:50.385 [2024-11-20 11:42:56.062609] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:28:50.385 [2024-11-20 11:42:56.070787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:50.385 [2024-11-20 11:42:56.071133] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:28:50.385 [2024-11-20 11:42:56.071161] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:28:50.385 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:50.386 [2024-11-20 11:42:56.078705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:28:50.386 [2024-11-20 11:42:56.133615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:50.386 [2024-11-20 11:42:56.134547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:28:50.386 [2024-11-20 11:42:56.140567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:50.386 [2024-11-20 11:42:56.140890] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:28:50.386 [2024-11-20 11:42:56.140918] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.386 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:50.386 [2024-11-20 
11:42:56.148725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:28:50.644 [2024-11-20 11:42:56.192705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:50.644 [2024-11-20 11:42:56.193611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:28:50.644 [2024-11-20 11:42:56.202830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:50.644 [2024-11-20 11:42:56.203171] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:28:50.644 [2024-11-20 11:42:56.203196] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:28:50.644 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.644 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:28:50.903 [2024-11-20 11:42:56.506747] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:50.903 [2024-11-20 11:42:56.514650] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:50.903 [2024-11-20 11:42:56.514702] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:28:50.903 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:28:50.903 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:50.903 11:42:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:50.903 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.903 11:42:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:51.470 11:42:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.470 11:42:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:51.470 11:42:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:51.470 11:42:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.470 11:42:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:52.407 11:42:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.407 11:42:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:52.407 11:42:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:28:52.407 11:42:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.407 11:42:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:52.666 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.666 11:42:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:28:52.666 11:42:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:28:52.666 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.666 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:28:52.928 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:28:53.186 ************************************ 00:28:53.186 END TEST test_create_multi_ublk 00:28:53.186 ************************************ 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:28:53.186 00:28:53.186 real 0m5.217s 00:28:53.186 user 0m1.346s 00:28:53.186 sys 0m0.167s 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.186 11:42:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:28:53.186 11:42:58 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:53.186 11:42:58 ublk -- ublk/ublk.sh@147 -- # cleanup 00:28:53.186 11:42:58 ublk -- ublk/ublk.sh@130 -- # killprocess 75491 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@954 -- # '[' -z 75491 ']' 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@958 -- # kill -0 75491 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@959 -- # uname 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75491 00:28:53.186 killing process with pid 75491 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75491' 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@973 -- # kill 75491 00:28:53.186 11:42:58 ublk -- common/autotest_common.sh@978 -- # wait 75491 00:28:54.564 [2024-11-20 11:42:59.925756] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:54.564 [2024-11-20 11:42:59.926056] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:28:55.500 00:28:55.500 real 0m32.061s 00:28:55.500 user 0m45.033s 00:28:55.500 sys 0m11.650s 00:28:55.500 11:43:01 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.500 ************************************ 00:28:55.500 END TEST ublk 00:28:55.500 ************************************ 00:28:55.500 11:43:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:55.500 11:43:01 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:28:55.500 11:43:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:55.500 
11:43:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.500 11:43:01 -- common/autotest_common.sh@10 -- # set +x 00:28:55.500 ************************************ 00:28:55.500 START TEST ublk_recovery 00:28:55.500 ************************************ 00:28:55.500 11:43:01 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:28:55.758 * Looking for test storage... 00:28:55.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:28:55.758 11:43:01 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:55.758 11:43:01 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.759 11:43:01 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.759 --rc genhtml_branch_coverage=1 00:28:55.759 --rc genhtml_function_coverage=1 00:28:55.759 --rc genhtml_legend=1 00:28:55.759 --rc geninfo_all_blocks=1 00:28:55.759 --rc geninfo_unexecuted_blocks=1 00:28:55.759 00:28:55.759 ' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.759 --rc genhtml_branch_coverage=1 00:28:55.759 --rc genhtml_function_coverage=1 00:28:55.759 --rc genhtml_legend=1 00:28:55.759 --rc geninfo_all_blocks=1 00:28:55.759 --rc geninfo_unexecuted_blocks=1 00:28:55.759 00:28:55.759 ' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.759 --rc genhtml_branch_coverage=1 00:28:55.759 --rc genhtml_function_coverage=1 00:28:55.759 --rc genhtml_legend=1 00:28:55.759 --rc geninfo_all_blocks=1 00:28:55.759 --rc geninfo_unexecuted_blocks=1 00:28:55.759 00:28:55.759 ' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:55.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.759 --rc genhtml_branch_coverage=1 00:28:55.759 --rc genhtml_function_coverage=1 00:28:55.759 --rc genhtml_legend=1 00:28:55.759 --rc geninfo_all_blocks=1 00:28:55.759 --rc geninfo_unexecuted_blocks=1 00:28:55.759 00:28:55.759 ' 00:28:55.759 11:43:01 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:28:55.759 11:43:01 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:28:55.759 11:43:01 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:28:55.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.759 11:43:01 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75918 00:28:55.759 11:43:01 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:55.759 11:43:01 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75918 00:28:55.759 11:43:01 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75918 ']' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.759 11:43:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.018 [2024-11-20 11:43:01.538288] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:56.018 [2024-11-20 11:43:01.538494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75918 ] 00:28:56.018 [2024-11-20 11:43:01.767880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:56.277 [2024-11-20 11:43:01.959858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.277 [2024-11-20 11:43:01.959869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:28:57.215 11:43:02 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.215 [2024-11-20 11:43:02.909561] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:57.215 [2024-11-20 11:43:02.912428] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.215 11:43:02 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.215 11:43:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.474 malloc0 00:28:57.474 11:43:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.474 11:43:03 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:28:57.474 11:43:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.474 11:43:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.474 [2024-11-20 11:43:03.062738] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:28:57.474 [2024-11-20 11:43:03.062876] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:28:57.474 [2024-11-20 11:43:03.062897] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:28:57.474 [2024-11-20 11:43:03.062912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:28:57.474 [2024-11-20 11:43:03.070765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:57.474 [2024-11-20 11:43:03.070788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:57.474 [2024-11-20 11:43:03.078576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:57.474 [2024-11-20 11:43:03.078750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:28:57.474 [2024-11-20 11:43:03.093674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:28:57.474 1 00:28:57.474 11:43:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.474 11:43:03 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:28:58.419 11:43:04 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75954 00:28:58.419 11:43:04 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:28:58.419 11:43:04 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:28:58.688 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:58.688 fio-3.35 00:28:58.688 Starting 1 process 00:29:03.961 11:43:09 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75918 00:29:03.961 11:43:09 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:29:09.228 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75918 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:29:09.228 11:43:14 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76065 00:29:09.228 11:43:14 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:09.228 11:43:14 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:09.228 11:43:14 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76065 00:29:09.228 11:43:14 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76065 ']' 00:29:09.228 11:43:14 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.228 11:43:14 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.228 11:43:14 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.228 11:43:14 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.228 11:43:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.228 [2024-11-20 11:43:14.247351] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
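The recovery test driving this part of the log crash-kills the ublk target mid-I/O and then re-adopts the still-live /dev/ublkb1 from a fresh target process. Below is a minimal sketch of that flow, reconstructed from the ublk_recovery.sh trace in this run; `rpc.py` stands for scripts/rpc.py, the bdev name, queue count and depth are the ones shown above, and the fio arguments are abbreviated.

    # bring up a target and expose a malloc bdev as /dev/ublkb1
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128

    # start I/O against the kernel block device, then crash the target
    fio --name=fio_test --filename=/dev/ublkb1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"

    # a fresh target recovers the same live device instead of re-creating it
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x3 -L ublk &
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1

As the remainder of the run shows, recovery then proceeds through UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY while the fio job keeps running.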
00:29:09.228 [2024-11-20 11:43:14.247521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76065 ] 00:29:09.228 [2024-11-20 11:43:14.441678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:09.228 [2024-11-20 11:43:14.606688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.228 [2024-11-20 11:43:14.606701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:29:09.795 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.795 [2024-11-20 11:43:15.533564] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:09.795 [2024-11-20 11:43:15.536587] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.795 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.795 11:43:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.054 malloc0 00:29:10.055 11:43:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.055 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:29:10.055 11:43:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.055 11:43:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.055 [2024-11-20 11:43:15.702962] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:29:10.055 [2024-11-20 11:43:15.703056] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:10.055 [2024-11-20 11:43:15.703073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:10.055 [2024-11-20 11:43:15.710720] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:10.055 [2024-11-20 11:43:15.710754] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:29:10.055 1 00:29:10.055 11:43:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.055 11:43:15 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75954 00:29:10.991 [2024-11-20 11:43:16.710796] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:10.991 [2024-11-20 11:43:16.714638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:10.991 [2024-11-20 11:43:16.714666] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:29:12.367 [2024-11-20 11:43:17.718643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:12.367 [2024-11-20 11:43:17.726568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:12.367 [2024-11-20 11:43:17.726599] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:29:13.304 [2024-11-20 11:43:18.726634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:13.304 [2024-11-20 11:43:18.730608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:13.304 [2024-11-20 11:43:18.730634] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:29:13.304 [2024-11-20 11:43:18.730650] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:29:13.304 [2024-11-20 11:43:18.730763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:29:35.236 [2024-11-20 11:43:39.587631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:29:35.236 [2024-11-20 11:43:39.595438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:29:35.236 [2024-11-20 11:43:39.603007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:29:35.236 [2024-11-20 11:43:39.603035] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:30:01.777 00:30:01.777 fio_test: (groupid=0, jobs=1): err= 0: pid=75960: Wed Nov 20 11:44:04 2024 00:30:01.777 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(2447MiB/60002msec) 00:30:01.777 slat (usec): min=2, max=1781, avg= 6.26, stdev= 4.32 00:30:01.777 clat (usec): min=977, max=30506k, avg=6704.52, stdev=338174.32 00:30:01.777 lat (usec): min=994, max=30506k, avg=6710.78, stdev=338174.31 00:30:01.777 clat percentiles (msec): 00:30:01.777 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:30:01.777 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:30:01.777 | 70.00th=[ 3], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:30:01.777 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:30:01.777 | 99.99th=[17113] 00:30:01.777 bw ( KiB/s): min=20752, max=91992, per=100.00%, avg=83614.78, stdev=12493.14, samples=59 00:30:01.777 iops : min= 5188, max=22998, avg=20903.68, stdev=3123.28, samples=59 00:30:01.777 write: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(2444MiB/60002msec); 0 zone resets 00:30:01.777 slat (usec): min=2, max=488, avg= 6.48, stdev= 3.51 00:30:01.777 clat (usec): min=972, max=30506k, avg=5548.29, stdev=275354.72 00:30:01.777 lat (usec): min=977, max=30506k, avg=5554.77, stdev=275354.72 00:30:01.777 clat percentiles (usec): 00:30:01.777 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:30:01.777 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966], 00:30:01.777 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3916], 00:30:01.777 | 99.00th=[ 6259], 99.50th=[ 6849], 99.90th=[ 8094], 99.95th=[ 9503], 00:30:01.777 | 99.99th=[13435] 00:30:01.777 bw ( KiB/s): min=21536, max=91112, per=100.00%, avg=83531.05, stdev=12448.88, samples=59 00:30:01.777 iops : min= 5384, max=22778, avg=20882.75, stdev=3112.22, samples=59 00:30:01.777 lat (usec) : 1000=0.01% 00:30:01.777 lat (msec) : 2=0.15%, 4=94.95%, 10=4.86%, 20=0.03%, >=2000=0.01% 00:30:01.777 cpu : usr=5.84%, sys=12.29%, ctx=36172, majf=0, minf=13 00:30:01.777 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:30:01.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:01.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:01.777 issued rwts: total=626324,625760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:01.777 
latency : target=0, window=0, percentile=100.00%, depth=128 00:30:01.777 00:30:01.777 Run status group 0 (all jobs): 00:30:01.777 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=2447MiB (2565MB), run=60002-60002msec 00:30:01.777 WRITE: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=2444MiB (2563MB), run=60002-60002msec 00:30:01.777 00:30:01.777 Disk stats (read/write): 00:30:01.777 ublkb1: ios=623860/623285, merge=0/0, ticks=4131666/3334857, in_queue=7466523, util=99.94% 00:30:01.777 11:44:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.777 [2024-11-20 11:44:04.374408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:30:01.777 [2024-11-20 11:44:04.403791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:01.777 [2024-11-20 11:44:04.403996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:30:01.777 [2024-11-20 11:44:04.411692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:01.777 [2024-11-20 11:44:04.411824] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:30:01.777 [2024-11-20 11:44:04.411839] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.777 11:44:04 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.777 [2024-11-20 11:44:04.430701] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:01.777 [2024-11-20 11:44:04.436234] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:01.777 [2024-11-20 11:44:04.436312] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.777 11:44:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:01.777 11:44:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:30:01.777 11:44:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76065 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76065 ']' 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76065 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76065 00:30:01.777 killing process with pid 76065 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76065' 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76065 00:30:01.777 11:44:04 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76065 00:30:01.777 [2024-11-20 11:44:06.102605] ublk.c: 835:_ublk_fini: *DEBUG*: finish 
shutdown 00:30:01.777 [2024-11-20 11:44:06.102663] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:01.777 00:30:01.777 real 1m6.087s 00:30:01.777 user 1m51.000s 00:30:01.777 sys 0m21.313s 00:30:01.777 11:44:07 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.777 ************************************ 00:30:01.777 END TEST ublk_recovery 00:30:01.777 ************************************ 00:30:01.777 11:44:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.777 11:44:07 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:30:01.777 11:44:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:30:01.777 11:44:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.777 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:30:01.777 11:44:07 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:30:01.777 11:44:07 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:01.778 11:44:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:01.778 11:44:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.778 11:44:07 -- common/autotest_common.sh@10 -- # set +x 00:30:01.778 ************************************ 00:30:01.778 START TEST ftl 00:30:01.778 ************************************ 00:30:01.778 11:44:07 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:01.778 * Looking for test storage... 00:30:01.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.778 11:44:07 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:01.778 11:44:07 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:30:01.778 11:44:07 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:01.778 11:44:07 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:01.778 11:44:07 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.778 11:44:07 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.778 11:44:07 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.778 11:44:07 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.778 11:44:07 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.778 11:44:07 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.778 11:44:07 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.778 11:44:07 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.778 11:44:07 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.778 11:44:07 ftl -- scripts/common.sh@344 -- # case "$op" in 00:30:01.778 11:44:07 ftl -- scripts/common.sh@345 -- # : 1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.778 11:44:07 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.778 11:44:07 ftl -- scripts/common.sh@365 -- # decimal 1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@353 -- # local d=1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.778 11:44:07 ftl -- scripts/common.sh@355 -- # echo 1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.778 11:44:07 ftl -- scripts/common.sh@366 -- # decimal 2 00:30:01.778 11:44:07 ftl -- scripts/common.sh@353 -- # local d=2 00:30:01.778 11:44:07 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.778 11:44:07 ftl -- scripts/common.sh@355 -- # echo 2 00:30:02.037 11:44:07 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.037 11:44:07 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.037 11:44:07 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.037 11:44:07 ftl -- scripts/common.sh@368 -- # return 0 00:30:02.037 11:44:07 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.037 11:44:07 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:02.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.037 --rc genhtml_branch_coverage=1 00:30:02.037 --rc genhtml_function_coverage=1 00:30:02.037 --rc genhtml_legend=1 00:30:02.037 --rc geninfo_all_blocks=1 00:30:02.037 --rc geninfo_unexecuted_blocks=1 00:30:02.037 00:30:02.037 ' 00:30:02.037 11:44:07 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:02.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.037 --rc genhtml_branch_coverage=1 00:30:02.037 --rc genhtml_function_coverage=1 00:30:02.037 --rc genhtml_legend=1 00:30:02.037 --rc geninfo_all_blocks=1 00:30:02.037 --rc geninfo_unexecuted_blocks=1 00:30:02.037 00:30:02.037 ' 00:30:02.037 11:44:07 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:02.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.037 --rc genhtml_branch_coverage=1 00:30:02.037 --rc genhtml_function_coverage=1 00:30:02.038 --rc genhtml_legend=1 00:30:02.038 --rc geninfo_all_blocks=1 00:30:02.038 --rc geninfo_unexecuted_blocks=1 00:30:02.038 00:30:02.038 ' 00:30:02.038 11:44:07 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:02.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.038 --rc genhtml_branch_coverage=1 00:30:02.038 --rc genhtml_function_coverage=1 00:30:02.038 --rc genhtml_legend=1 00:30:02.038 --rc geninfo_all_blocks=1 00:30:02.038 --rc geninfo_unexecuted_blocks=1 00:30:02.038 00:30:02.038 ' 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:02.038 11:44:07 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:02.038 11:44:07 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:02.038 11:44:07 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:02.038 11:44:07 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:02.038 11:44:07 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:02.038 11:44:07 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.038 11:44:07 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:02.038 11:44:07 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:02.038 11:44:07 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.038 11:44:07 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.038 11:44:07 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:02.038 11:44:07 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:02.038 11:44:07 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:02.038 11:44:07 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:02.038 11:44:07 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:02.038 11:44:07 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:02.038 11:44:07 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.038 11:44:07 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.038 11:44:07 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:02.038 11:44:07 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:02.038 11:44:07 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:02.038 11:44:07 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:02.038 11:44:07 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:02.038 11:44:07 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:02.038 11:44:07 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:02.038 11:44:07 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:02.038 11:44:07 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.038 11:44:07 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:30:02.038 11:44:07 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:02.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:02.297 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:02.297 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:02.298 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:02.298 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:02.558 11:44:08 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76856 00:30:02.558 11:44:08 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:30:02.558 11:44:08 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76856 00:30:02.558 11:44:08 ftl -- common/autotest_common.sh@835 -- # '[' -z 76856 ']' 00:30:02.558 11:44:08 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.558 11:44:08 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.558 11:44:08 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.558 11:44:08 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.558 11:44:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:02.558 [2024-11-20 11:44:08.209343] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:30:02.558 [2024-11-20 11:44:08.209888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76856 ] 00:30:02.816 [2024-11-20 11:44:08.401584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.816 [2024-11-20 11:44:08.555781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.752 11:44:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.753 11:44:09 ftl -- common/autotest_common.sh@868 -- # return 0 00:30:03.753 11:44:09 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:30:03.753 11:44:09 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:30:05.129 11:44:10 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:30:05.129 11:44:10 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:05.388 11:44:11 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:30:05.388 11:44:11 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:30:05.388 11:44:11 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@50 -- # break 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:30:05.648 11:44:11 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:30:05.909 11:44:11 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:30:05.909 11:44:11 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:30:05.909 11:44:11 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:30:05.909 11:44:11 ftl -- ftl/ftl.sh@63 -- # break 00:30:05.909 11:44:11 ftl -- ftl/ftl.sh@66 -- # killprocess 76856 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@954 -- # '[' -z 76856 ']' 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@958 -- # kill -0 76856 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@959 -- # uname 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.909 11:44:11 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76856 00:30:05.909 killing process with pid 76856 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76856' 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@973 -- # kill 76856 00:30:05.909 11:44:11 ftl -- common/autotest_common.sh@978 -- # wait 76856 00:30:07.814 11:44:13 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:30:07.814 11:44:13 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:30:07.814 11:44:13 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:07.814 11:44:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.814 11:44:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:07.814 ************************************ 00:30:07.814 START TEST ftl_fio_basic 00:30:07.814 ************************************ 00:30:07.814 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:30:08.098 * Looking for test storage... 00:30:08.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.098 --rc genhtml_branch_coverage=1 00:30:08.098 --rc genhtml_function_coverage=1 00:30:08.098 --rc genhtml_legend=1 00:30:08.098 --rc geninfo_all_blocks=1 00:30:08.098 --rc geninfo_unexecuted_blocks=1 00:30:08.098 00:30:08.098 ' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.098 --rc genhtml_branch_coverage=1 00:30:08.098 --rc genhtml_function_coverage=1 00:30:08.098 --rc genhtml_legend=1 00:30:08.098 --rc geninfo_all_blocks=1 00:30:08.098 --rc geninfo_unexecuted_blocks=1 00:30:08.098 00:30:08.098 ' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.098 --rc genhtml_branch_coverage=1 00:30:08.098 --rc genhtml_function_coverage=1 00:30:08.098 --rc genhtml_legend=1 00:30:08.098 --rc geninfo_all_blocks=1 00:30:08.098 --rc geninfo_unexecuted_blocks=1 00:30:08.098 00:30:08.098 ' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.098 --rc genhtml_branch_coverage=1 00:30:08.098 --rc genhtml_function_coverage=1 00:30:08.098 --rc genhtml_legend=1 00:30:08.098 --rc geninfo_all_blocks=1 00:30:08.098 --rc geninfo_unexecuted_blocks=1 00:30:08.098 00:30:08.098 ' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
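The cmp_versions walk traced above is scripts/common.sh deciding whether the installed lcov predates 2.0: both version strings are split on '.', '-' and ':' and compared numeric component by numeric component, so 'lt 1.15 2' succeeds on the first component (1 < 2) and the legacy '--rc lcov_*' option spelling is selected below. A minimal standalone sketch of that comparison (version_lt is a hypothetical name, not the SPDK helper itself):

    # version_lt A B -- succeed when A < B, comparing numeric components
    # left to right; missing components are treated as 0
    version_lt() {
        local IFS=.-:                  # same separators the trace shows
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal is not less-than
    }
    version_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2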
00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:08.098 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77000 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77000 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77000 ']' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.099 11:44:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:08.379 [2024-11-20 11:44:13.893707] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
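fio.sh's dispatch is visible in the xtrace above: a bash associative array maps the suite name passed as the third positional argument ('basic' in this run, after the base and cache PCI addresses) to a space-separated list of fio job names, and FTL_BDEV_NAME/FTL_JSON_CONF are exported so the jobs can find the target bdev and its config. A sketch of that lookup (the default subscript and the loop body are assumptions for the sketch; the real script hands each job to fio):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'

    device=$1 cache_device=$2            # PCI addresses, as in fio.sh@23-24
    tests=${suite[${3:-basic}]}          # $3 selects the suite ('basic' here)
    [ -z "$tests" ] && { echo "unknown suite: ${3:-}" >&2; exit 1; }
    for t in $tests; do
        echo "would run fio job: $t"     # real script: run fio against ftl0
    done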
00:30:08.379 [2024-11-20 11:44:13.893885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77000 ] 00:30:08.379 [2024-11-20 11:44:14.080632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:08.638 [2024-11-20 11:44:14.200360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.638 [2024-11-20 11:44:14.200514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.638 [2024-11-20 11:44:14.200577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:30:09.573 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:30:09.832 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:10.091 { 00:30:10.091 "name": "nvme0n1", 00:30:10.091 "aliases": [ 00:30:10.091 "d8f0b7a0-de71-40a6-a588-fa852141bfe6" 00:30:10.091 ], 00:30:10.091 "product_name": "NVMe disk", 00:30:10.091 "block_size": 4096, 00:30:10.091 "num_blocks": 1310720, 00:30:10.091 "uuid": "d8f0b7a0-de71-40a6-a588-fa852141bfe6", 00:30:10.091 "numa_id": -1, 00:30:10.091 "assigned_rate_limits": { 00:30:10.091 "rw_ios_per_sec": 0, 00:30:10.091 "rw_mbytes_per_sec": 0, 00:30:10.091 "r_mbytes_per_sec": 0, 00:30:10.091 "w_mbytes_per_sec": 0 00:30:10.091 }, 00:30:10.091 "claimed": false, 00:30:10.091 "zoned": false, 00:30:10.091 "supported_io_types": { 00:30:10.091 "read": true, 00:30:10.091 "write": true, 00:30:10.091 "unmap": true, 00:30:10.091 "flush": true, 00:30:10.091 "reset": true, 00:30:10.091 "nvme_admin": true, 00:30:10.091 "nvme_io": true, 00:30:10.091 "nvme_io_md": false, 00:30:10.091 "write_zeroes": true, 00:30:10.091 "zcopy": false, 00:30:10.091 "get_zone_info": false, 00:30:10.091 "zone_management": false, 00:30:10.091 "zone_append": false, 00:30:10.091 "compare": true, 00:30:10.091 "compare_and_write": false, 00:30:10.091 "abort": true, 00:30:10.091 
"seek_hole": false, 00:30:10.091 "seek_data": false, 00:30:10.091 "copy": true, 00:30:10.091 "nvme_iov_md": false 00:30:10.091 }, 00:30:10.091 "driver_specific": { 00:30:10.091 "nvme": [ 00:30:10.091 { 00:30:10.091 "pci_address": "0000:00:11.0", 00:30:10.091 "trid": { 00:30:10.091 "trtype": "PCIe", 00:30:10.091 "traddr": "0000:00:11.0" 00:30:10.091 }, 00:30:10.091 "ctrlr_data": { 00:30:10.091 "cntlid": 0, 00:30:10.091 "vendor_id": "0x1b36", 00:30:10.091 "model_number": "QEMU NVMe Ctrl", 00:30:10.091 "serial_number": "12341", 00:30:10.091 "firmware_revision": "8.0.0", 00:30:10.091 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:10.091 "oacs": { 00:30:10.091 "security": 0, 00:30:10.091 "format": 1, 00:30:10.091 "firmware": 0, 00:30:10.091 "ns_manage": 1 00:30:10.091 }, 00:30:10.091 "multi_ctrlr": false, 00:30:10.091 "ana_reporting": false 00:30:10.091 }, 00:30:10.091 "vs": { 00:30:10.091 "nvme_version": "1.4" 00:30:10.091 }, 00:30:10.091 "ns_data": { 00:30:10.091 "id": 1, 00:30:10.091 "can_share": false 00:30:10.091 } 00:30:10.091 } 00:30:10.091 ], 00:30:10.091 "mp_policy": "active_passive" 00:30:10.091 } 00:30:10.091 } 00:30:10.091 ]' 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.091 11:44:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:10.350 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:30:10.350 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:10.608 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=4382d387-958d-420f-b239-c0c3ac9f5778 00:30:10.608 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4382d387-958d-420f-b239-c0c3ac9f5778 00:30:10.866 11:44:16 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fe1a82bf-66fe-462f-99df-ee116b3aa015 
00:30:10.867 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:30:10.867 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:11.124 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:11.124 { 00:30:11.124 "name": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:11.124 "aliases": [ 00:30:11.124 "lvs/nvme0n1p0" 00:30:11.124 ], 00:30:11.124 "product_name": "Logical Volume", 00:30:11.124 "block_size": 4096, 00:30:11.124 "num_blocks": 26476544, 00:30:11.124 "uuid": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:11.124 "assigned_rate_limits": { 00:30:11.124 "rw_ios_per_sec": 0, 00:30:11.124 "rw_mbytes_per_sec": 0, 00:30:11.124 "r_mbytes_per_sec": 0, 00:30:11.124 "w_mbytes_per_sec": 0 00:30:11.124 }, 00:30:11.124 "claimed": false, 00:30:11.124 "zoned": false, 00:30:11.124 "supported_io_types": { 00:30:11.125 "read": true, 00:30:11.125 "write": true, 00:30:11.125 "unmap": true, 00:30:11.125 "flush": false, 00:30:11.125 "reset": true, 00:30:11.125 "nvme_admin": false, 00:30:11.125 "nvme_io": false, 00:30:11.125 "nvme_io_md": false, 00:30:11.125 "write_zeroes": true, 00:30:11.125 "zcopy": false, 00:30:11.125 "get_zone_info": false, 00:30:11.125 "zone_management": false, 00:30:11.125 "zone_append": false, 00:30:11.125 "compare": false, 00:30:11.125 "compare_and_write": false, 00:30:11.125 "abort": false, 00:30:11.125 "seek_hole": true, 00:30:11.125 "seek_data": true, 00:30:11.125 "copy": false, 00:30:11.125 "nvme_iov_md": false 00:30:11.125 }, 00:30:11.125 "driver_specific": { 00:30:11.125 "lvol": { 00:30:11.125 "lvol_store_uuid": "4382d387-958d-420f-b239-c0c3ac9f5778", 00:30:11.125 "base_bdev": "nvme0n1", 00:30:11.125 "thin_provision": true, 00:30:11.125 "num_allocated_clusters": 0, 00:30:11.125 "snapshot": false, 00:30:11.125 "clone": false, 00:30:11.125 "esnap_clone": false 00:30:11.125 } 00:30:11.125 } 00:30:11.125 } 00:30:11.125 ]' 00:30:11.125 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:30:11.382 11:44:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:11.639 11:44:17 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:30:11.639 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:11.897 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:11.897 { 00:30:11.897 "name": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:11.897 "aliases": [ 00:30:11.897 "lvs/nvme0n1p0" 00:30:11.898 ], 00:30:11.898 "product_name": "Logical Volume", 00:30:11.898 "block_size": 4096, 00:30:11.898 "num_blocks": 26476544, 00:30:11.898 "uuid": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:11.898 "assigned_rate_limits": { 00:30:11.898 "rw_ios_per_sec": 0, 00:30:11.898 "rw_mbytes_per_sec": 0, 00:30:11.898 "r_mbytes_per_sec": 0, 00:30:11.898 "w_mbytes_per_sec": 0 00:30:11.898 }, 00:30:11.898 "claimed": false, 00:30:11.898 "zoned": false, 00:30:11.898 "supported_io_types": { 00:30:11.898 "read": true, 00:30:11.898 "write": true, 00:30:11.898 "unmap": true, 00:30:11.898 "flush": false, 00:30:11.898 "reset": true, 00:30:11.898 "nvme_admin": false, 00:30:11.898 "nvme_io": false, 00:30:11.898 "nvme_io_md": false, 00:30:11.898 "write_zeroes": true, 00:30:11.898 "zcopy": false, 00:30:11.898 "get_zone_info": false, 00:30:11.898 "zone_management": false, 00:30:11.898 "zone_append": false, 00:30:11.898 "compare": false, 00:30:11.898 "compare_and_write": false, 00:30:11.898 "abort": false, 00:30:11.898 "seek_hole": true, 00:30:11.898 "seek_data": true, 00:30:11.898 "copy": false, 00:30:11.898 "nvme_iov_md": false 00:30:11.898 }, 00:30:11.898 "driver_specific": { 00:30:11.898 "lvol": { 00:30:11.898 "lvol_store_uuid": "4382d387-958d-420f-b239-c0c3ac9f5778", 00:30:11.898 "base_bdev": "nvme0n1", 00:30:11.898 "thin_provision": true, 00:30:11.898 "num_allocated_clusters": 0, 00:30:11.898 "snapshot": false, 00:30:11.898 "clone": false, 00:30:11.898 "esnap_clone": false 00:30:11.898 } 00:30:11.898 } 00:30:11.898 } 00:30:11.898 ]' 00:30:11.898 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:11.898 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:30:11.898 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:12.156 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:12.156 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:12.156 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:30:12.156 11:44:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:30:12.156 11:44:17 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:30:12.414 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:30:12.414 11:44:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe1a82bf-66fe-462f-99df-ee116b3aa015 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:12.673 { 00:30:12.673 "name": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:12.673 "aliases": [ 00:30:12.673 "lvs/nvme0n1p0" 00:30:12.673 ], 00:30:12.673 "product_name": "Logical Volume", 00:30:12.673 "block_size": 4096, 00:30:12.673 "num_blocks": 26476544, 00:30:12.673 "uuid": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:12.673 "assigned_rate_limits": { 00:30:12.673 "rw_ios_per_sec": 0, 00:30:12.673 "rw_mbytes_per_sec": 0, 00:30:12.673 "r_mbytes_per_sec": 0, 00:30:12.673 "w_mbytes_per_sec": 0 00:30:12.673 }, 00:30:12.673 "claimed": false, 00:30:12.673 "zoned": false, 00:30:12.673 "supported_io_types": { 00:30:12.673 "read": true, 00:30:12.673 "write": true, 00:30:12.673 "unmap": true, 00:30:12.673 "flush": false, 00:30:12.673 "reset": true, 00:30:12.673 "nvme_admin": false, 00:30:12.673 "nvme_io": false, 00:30:12.673 "nvme_io_md": false, 00:30:12.673 "write_zeroes": true, 00:30:12.673 "zcopy": false, 00:30:12.673 "get_zone_info": false, 00:30:12.673 "zone_management": false, 00:30:12.673 "zone_append": false, 00:30:12.673 "compare": false, 00:30:12.673 "compare_and_write": false, 00:30:12.673 "abort": false, 00:30:12.673 "seek_hole": true, 00:30:12.673 "seek_data": true, 00:30:12.673 "copy": false, 00:30:12.673 "nvme_iov_md": false 00:30:12.673 }, 00:30:12.673 "driver_specific": { 00:30:12.673 "lvol": { 00:30:12.673 "lvol_store_uuid": "4382d387-958d-420f-b239-c0c3ac9f5778", 00:30:12.673 "base_bdev": "nvme0n1", 00:30:12.673 "thin_provision": true, 00:30:12.673 "num_allocated_clusters": 0, 00:30:12.673 "snapshot": false, 00:30:12.673 "clone": false, 00:30:12.673 "esnap_clone": false 00:30:12.673 } 00:30:12.673 } 00:30:12.673 } 00:30:12.673 ]' 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:30:12.673 11:44:18 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fe1a82bf-66fe-462f-99df-ee116b3aa015 -c nvc0n1p0 --l2p_dram_limit 60 00:30:12.933 [2024-11-20 11:44:18.627551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.627925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:12.933 [2024-11-20 11:44:18.627975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:12.933 
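Two points in the trace just above are worth unpacking. The '/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected' message comes from a '[ ... -eq 1 ]' test on an unset, unquoted variable: the empty expansion leaves -eq with no left operand, the test fails, and the script simply falls through to the else branch. Then the FTL bdev is created over the split pair, base lvol plus nvc0n1p0 as NV cache, with a raised RPC timeout because first-time startup scrubs the cache region. The creation call, exactly as this run issued it (the -d UUID is this run's lvol and changes on every invocation):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -t 240 raises the RPC client timeout: the 'Scrub NV cache' step below
    # takes ~3.1 s in this VM but can run far longer on real media
    $RPC -t 240 bdev_ftl_create -b ftl0 \
        -d fe1a82bf-66fe-462f-99df-ee116b3aa015 \
        -c nvc0n1p0 \
        --l2p_dram_limit 60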
[2024-11-20 11:44:18.627994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.628143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.628172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:12.933 [2024-11-20 11:44:18.628194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:12.933 [2024-11-20 11:44:18.628210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.628312] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:12.933 [2024-11-20 11:44:18.629952] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:12.933 [2024-11-20 11:44:18.630118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.630166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:12.933 [2024-11-20 11:44:18.630204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:30:12.933 [2024-11-20 11:44:18.630231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.630599] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a52af1f4-0851-4b4e-9ab5-2148c6f084f7 00:30:12.933 [2024-11-20 11:44:18.633679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.633789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:12.933 [2024-11-20 11:44:18.633828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:30:12.933 [2024-11-20 11:44:18.633860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.645956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.646080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:12.933 [2024-11-20 11:44:18.646107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.857 ms 00:30:12.933 [2024-11-20 11:44:18.646126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.646357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.646397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:12.933 [2024-11-20 11:44:18.646424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:12.933 [2024-11-20 11:44:18.646457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.646685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.646725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:12.933 [2024-11-20 11:44:18.646745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:12.933 [2024-11-20 11:44:18.646763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.646819] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:12.933 [2024-11-20 11:44:18.653500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 
11:44:18.653591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:12.933 [2024-11-20 11:44:18.653629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:30:12.933 [2024-11-20 11:44:18.653649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.653728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.933 [2024-11-20 11:44:18.653748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:12.933 [2024-11-20 11:44:18.653768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:12.933 [2024-11-20 11:44:18.653783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.933 [2024-11-20 11:44:18.653876] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:12.933 [2024-11-20 11:44:18.654132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:12.933 [2024-11-20 11:44:18.654174] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:12.933 [2024-11-20 11:44:18.654196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:12.933 [2024-11-20 11:44:18.654234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:12.933 [2024-11-20 11:44:18.654253] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:12.933 [2024-11-20 11:44:18.654273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:12.934 [2024-11-20 11:44:18.654288] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:12.934 [2024-11-20 11:44:18.654306] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:12.934 [2024-11-20 11:44:18.654322] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:12.934 [2024-11-20 11:44:18.654342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.934 [2024-11-20 11:44:18.654365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:12.934 [2024-11-20 11:44:18.654386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:30:12.934 [2024-11-20 11:44:18.654402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.934 [2024-11-20 11:44:18.654601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.934 [2024-11-20 11:44:18.654625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:12.934 [2024-11-20 11:44:18.654645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:30:12.934 [2024-11-20 11:44:18.654660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.934 [2024-11-20 11:44:18.654821] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:12.934 [2024-11-20 11:44:18.654854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:12.934 [2024-11-20 11:44:18.654878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:12.934 [2024-11-20 11:44:18.654893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.654911] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:30:12.934 [2024-11-20 11:44:18.654938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.654967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:12.934 [2024-11-20 11:44:18.654982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:12.934 [2024-11-20 11:44:18.654999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:12.934 [2024-11-20 11:44:18.655029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:12.934 [2024-11-20 11:44:18.655043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:12.934 [2024-11-20 11:44:18.655069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:12.934 [2024-11-20 11:44:18.655084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:12.934 [2024-11-20 11:44:18.655100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:12.934 [2024-11-20 11:44:18.655114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:12.934 [2024-11-20 11:44:18.655160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:12.934 [2024-11-20 11:44:18.655219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:12.934 [2024-11-20 11:44:18.655266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:12.934 [2024-11-20 11:44:18.655325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:12.934 [2024-11-20 11:44:18.655380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:12.934 [2024-11-20 11:44:18.655430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:12.934 [2024-11-20 11:44:18.655462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:12.934 [2024-11-20 11:44:18.655502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:12.934 [2024-11-20 11:44:18.655520] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:12.934 [2024-11-20 11:44:18.655560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:12.934 [2024-11-20 11:44:18.655582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:12.934 [2024-11-20 11:44:18.655596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:12.934 [2024-11-20 11:44:18.655657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:12.934 [2024-11-20 11:44:18.655676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655690] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:12.934 [2024-11-20 11:44:18.655710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:12.934 [2024-11-20 11:44:18.655747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.934 [2024-11-20 11:44:18.655802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:12.934 [2024-11-20 11:44:18.655828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:12.934 [2024-11-20 11:44:18.655849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:12.934 [2024-11-20 11:44:18.655868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:12.934 [2024-11-20 11:44:18.655886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:12.934 [2024-11-20 11:44:18.655904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:12.934 [2024-11-20 11:44:18.655925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:12.934 [2024-11-20 11:44:18.655949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:12.934 [2024-11-20 11:44:18.655966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:12.934 [2024-11-20 11:44:18.655985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:12.934 [2024-11-20 11:44:18.656000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:12.934 [2024-11-20 11:44:18.656028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:12.934 [2024-11-20 11:44:18.656043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:12.935 [2024-11-20 11:44:18.656061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:12.935 [2024-11-20 11:44:18.656076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:12.935 [2024-11-20 11:44:18.656093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:30:12.935 [2024-11-20 11:44:18.656108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:12.935 [2024-11-20 11:44:18.656129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:12.935 [2024-11-20 11:44:18.656144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:12.935 [2024-11-20 11:44:18.656171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:12.935 [2024-11-20 11:44:18.656185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:12.935 [2024-11-20 11:44:18.656204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:12.935 [2024-11-20 11:44:18.656230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:12.935 [2024-11-20 11:44:18.656269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:12.935 [2024-11-20 11:44:18.656291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:12.935 [2024-11-20 11:44:18.656310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:12.935 [2024-11-20 11:44:18.656325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:12.935 [2024-11-20 11:44:18.656344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:12.935 [2024-11-20 11:44:18.656361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.935 [2024-11-20 11:44:18.656379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:12.935 [2024-11-20 11:44:18.656395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:30:12.935 [2024-11-20 11:44:18.656413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.935 [2024-11-20 11:44:18.656570] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
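A quick consistency check on the layout dump above, using values straight from the notices: 20,971,520 L2P entries at 4 bytes apiece is exactly the 80.00 MiB reported for the l2p region, and the same entry count at one 4 KiB block per entry gives the 80 GiB of user capacity that ftl0 later reports as num_blocks 20971520. The --l2p_dram_limit 60 passed at creation is also why the resident L2P is later capped at 59 (of 60) MiB rather than the full 80:

    entries=20971520    # "L2P entries" from the layout setup notice
    addr=4              # "L2P address size" in bytes
    blk=4096            # FTL block size
    echo $(( entries * addr / 1024 / 1024 ))        # 80  (MiB, l2p region)
    echo $(( entries * blk / 1024 / 1024 / 1024 ))  # 80  (GiB exposed by ftl0)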
00:30:12.935 [2024-11-20 11:44:18.656615] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:16.226 [2024-11-20 11:44:21.793698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.793966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:16.226 [2024-11-20 11:44:21.794131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3137.140 ms 00:30:16.226 [2024-11-20 11:44:21.794193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.834342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.834618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.226 [2024-11-20 11:44:21.834770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.768 ms 00:30:16.226 [2024-11-20 11:44:21.834831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.835092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.835165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:16.226 [2024-11-20 11:44:21.835318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:16.226 [2024-11-20 11:44:21.835353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.890724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.890804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.226 [2024-11-20 11:44:21.890845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.287 ms 00:30:16.226 [2024-11-20 11:44:21.890863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.890926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.890946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.226 [2024-11-20 11:44:21.890961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:16.226 [2024-11-20 11:44:21.890976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.891701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.891733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.226 [2024-11-20 11:44:21.891749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:30:16.226 [2024-11-20 11:44:21.891768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.891979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.892010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.226 [2024-11-20 11:44:21.892025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:30:16.226 [2024-11-20 11:44:21.892042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.914320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.914374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.226 [2024-11-20 
11:44:21.914409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.243 ms 00:30:16.226 [2024-11-20 11:44:21.914424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.226 [2024-11-20 11:44:21.929117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:30:16.226 [2024-11-20 11:44:21.951560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.226 [2024-11-20 11:44:21.951652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:16.226 [2024-11-20 11:44:21.951679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.952 ms 00:30:16.226 [2024-11-20 11:44:21.951696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.017516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.017615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:16.485 [2024-11-20 11:44:22.017648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.721 ms 00:30:16.485 [2024-11-20 11:44:22.017662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.017963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.017991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:16.485 [2024-11-20 11:44:22.018012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:30:16.485 [2024-11-20 11:44:22.018025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.049125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.049340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:16.485 [2024-11-20 11:44:22.049379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.009 ms 00:30:16.485 [2024-11-20 11:44:22.049394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.079518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.079602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:16.485 [2024-11-20 11:44:22.079628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.058 ms 00:30:16.485 [2024-11-20 11:44:22.079641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.080528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.080573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:16.485 [2024-11-20 11:44:22.080609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:30:16.485 [2024-11-20 11:44:22.080622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.165352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.165612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:16.485 [2024-11-20 11:44:22.165663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.646 ms 00:30:16.485 [2024-11-20 11:44:22.165688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 
11:44:22.198284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.198362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:16.485 [2024-11-20 11:44:22.198404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.441 ms 00:30:16.485 [2024-11-20 11:44:22.198418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.485 [2024-11-20 11:44:22.231953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.485 [2024-11-20 11:44:22.232029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:16.485 [2024-11-20 11:44:22.232071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.438 ms 00:30:16.485 [2024-11-20 11:44:22.232085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.744 [2024-11-20 11:44:22.263593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.744 [2024-11-20 11:44:22.263783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:16.744 [2024-11-20 11:44:22.263820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.439 ms 00:30:16.744 [2024-11-20 11:44:22.263835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.744 [2024-11-20 11:44:22.263907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.744 [2024-11-20 11:44:22.263926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:16.744 [2024-11-20 11:44:22.263946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:16.744 [2024-11-20 11:44:22.263962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.744 [2024-11-20 11:44:22.264168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.744 [2024-11-20 11:44:22.264193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:16.744 [2024-11-20 11:44:22.264211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:16.744 [2024-11-20 11:44:22.264224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.744 [2024-11-20 11:44:22.265790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3637.659 ms, result 0 00:30:16.744 { 00:30:16.744 "name": "ftl0", 00:30:16.744 "uuid": "a52af1f4-0851-4b4e-9ab5-2148c6f084f7" 00:30:16.744 } 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:16.744 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:17.002 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:30:17.260 [ 00:30:17.260 { 00:30:17.260 "name": "ftl0", 00:30:17.260 "aliases": [ 00:30:17.260 "a52af1f4-0851-4b4e-9ab5-2148c6f084f7" 00:30:17.261 ], 00:30:17.261 "product_name": "FTL 
disk", 00:30:17.261 "block_size": 4096, 00:30:17.261 "num_blocks": 20971520, 00:30:17.261 "uuid": "a52af1f4-0851-4b4e-9ab5-2148c6f084f7", 00:30:17.261 "assigned_rate_limits": { 00:30:17.261 "rw_ios_per_sec": 0, 00:30:17.261 "rw_mbytes_per_sec": 0, 00:30:17.261 "r_mbytes_per_sec": 0, 00:30:17.261 "w_mbytes_per_sec": 0 00:30:17.261 }, 00:30:17.261 "claimed": false, 00:30:17.261 "zoned": false, 00:30:17.261 "supported_io_types": { 00:30:17.261 "read": true, 00:30:17.261 "write": true, 00:30:17.261 "unmap": true, 00:30:17.261 "flush": true, 00:30:17.261 "reset": false, 00:30:17.261 "nvme_admin": false, 00:30:17.261 "nvme_io": false, 00:30:17.261 "nvme_io_md": false, 00:30:17.261 "write_zeroes": true, 00:30:17.261 "zcopy": false, 00:30:17.261 "get_zone_info": false, 00:30:17.261 "zone_management": false, 00:30:17.261 "zone_append": false, 00:30:17.261 "compare": false, 00:30:17.261 "compare_and_write": false, 00:30:17.261 "abort": false, 00:30:17.261 "seek_hole": false, 00:30:17.261 "seek_data": false, 00:30:17.261 "copy": false, 00:30:17.261 "nvme_iov_md": false 00:30:17.261 }, 00:30:17.261 "driver_specific": { 00:30:17.261 "ftl": { 00:30:17.261 "base_bdev": "fe1a82bf-66fe-462f-99df-ee116b3aa015", 00:30:17.261 "cache": "nvc0n1p0" 00:30:17.261 } 00:30:17.261 } 00:30:17.261 } 00:30:17.261 ] 00:30:17.261 11:44:22 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:30:17.261 11:44:22 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:30:17.261 11:44:22 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:17.520 11:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:30:17.520 11:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:17.782 [2024-11-20 11:44:23.370551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.370623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:17.782 [2024-11-20 11:44:23.370646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:17.782 [2024-11-20 11:44:23.370662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.370713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:17.782 [2024-11-20 11:44:23.374545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.374586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:17.782 [2024-11-20 11:44:23.374606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:30:17.782 [2024-11-20 11:44:23.374619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.375142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.375170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:17.782 [2024-11-20 11:44:23.375202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:30:17.782 [2024-11-20 11:44:23.375215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.378466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.378660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:17.782 
[2024-11-20 11:44:23.378696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:30:17.782 [2024-11-20 11:44:23.378711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.385116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.385153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:17.782 [2024-11-20 11:44:23.385197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.359 ms 00:30:17.782 [2024-11-20 11:44:23.385210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.416805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.416998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:17.782 [2024-11-20 11:44:23.417035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.486 ms 00:30:17.782 [2024-11-20 11:44:23.417050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.435875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.435922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:17.782 [2024-11-20 11:44:23.435961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:30:17.782 [2024-11-20 11:44:23.435978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.436227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.436251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:17.782 [2024-11-20 11:44:23.436268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:30:17.782 [2024-11-20 11:44:23.436280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.467453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.467500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:17.782 [2024-11-20 11:44:23.467521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.136 ms 00:30:17.782 [2024-11-20 11:44:23.467558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.497929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.498107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:17.782 [2024-11-20 11:44:23.498144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.302 ms 00:30:17.782 [2024-11-20 11:44:23.498158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.782 [2024-11-20 11:44:23.528199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.782 [2024-11-20 11:44:23.528245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:17.782 [2024-11-20 11:44:23.528267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.972 ms 00:30:17.782 [2024-11-20 11:44:23.528280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.043 [2024-11-20 11:44:23.558490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.043 [2024-11-20 11:44:23.558686] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:18.043 [2024-11-20 11:44:23.558723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.025 ms 00:30:18.043 [2024-11-20 11:44:23.558737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.043 [2024-11-20 11:44:23.558803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:18.043 [2024-11-20 11:44:23.558828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.558997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 
[2024-11-20 11:44:23.559152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:18.043 [2024-11-20 11:44:23.559348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:30:18.044 [2024-11-20 11:44:23.559523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.559993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:18.044 [2024-11-20 11:44:23.560362] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:18.045 [2024-11-20 11:44:23.560378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a52af1f4-0851-4b4e-9ab5-2148c6f084f7 00:30:18.045 [2024-11-20 11:44:23.560390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:18.045 [2024-11-20 11:44:23.560406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:18.045 [2024-11-20 11:44:23.560418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:18.045 [2024-11-20 11:44:23.560437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:18.045 [2024-11-20 11:44:23.560449] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:18.045 [2024-11-20 11:44:23.560463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:18.045 [2024-11-20 11:44:23.560475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:18.045 [2024-11-20 11:44:23.560489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:18.045 [2024-11-20 11:44:23.560499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:18.045 [2024-11-20 11:44:23.560514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.045 [2024-11-20 11:44:23.560526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:18.045 [2024-11-20 11:44:23.560557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:30:18.045 [2024-11-20 11:44:23.560570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.045 [2024-11-20 11:44:23.577846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.045 [2024-11-20 11:44:23.578041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:18.045 [2024-11-20 11:44:23.578086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.195 ms 00:30:18.045 [2024-11-20 11:44:23.578101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.045 [2024-11-20 11:44:23.578620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:18.045 [2024-11-20 11:44:23.578645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:18.045 [2024-11-20 11:44:23.578663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:30:18.045 [2024-11-20 11:44:23.578675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.045 [2024-11-20 11:44:23.638762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.045 [2024-11-20 11:44:23.638834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:18.045 [2024-11-20 11:44:23.638875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.045 [2024-11-20 11:44:23.638889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
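Every FTL management step above follows the same trace pattern from mngt/ftl_mngt.c: an Action line, the step name, its duration, and a status code; the shutdown path additionally dumps per-band validity plus aggregate stats (WAF is reported as inf here because all 960 writes were internal, with zero user writes), and the Rollback entries appearing here seem to record each startup step being unwound in reverse as part of shutdown. To see which steps dominate a startup or shutdown, the duration lines can be tallied directly; a minimal sketch, assuming the console output is saved to ftl.log with one trace message per line as originally emitted:

# Rank FTL management steps by duration (hypothetical helper; relies only on
# the "name:" / "duration:" trace lines shown above appearing in pairs).
awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
     /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                               printf "%10.3f ms  %s\n", $0, name }' ftl.log | sort -rn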
00:30:18.045 [2024-11-20 11:44:23.638997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.045 [2024-11-20 11:44:23.639014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:18.045 [2024-11-20 11:44:23.639030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.045 [2024-11-20 11:44:23.639042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.045 [2024-11-20 11:44:23.639213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.045 [2024-11-20 11:44:23.639236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:18.045 [2024-11-20 11:44:23.639256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.045 [2024-11-20 11:44:23.639269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.045 [2024-11-20 11:44:23.639316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.045 [2024-11-20 11:44:23.639331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:18.045 [2024-11-20 11:44:23.639346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.045 [2024-11-20 11:44:23.639358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.045 [2024-11-20 11:44:23.752484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.045 [2024-11-20 11:44:23.752609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:18.045 [2024-11-20 11:44:23.752636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.045 [2024-11-20 11:44:23.752649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.838958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.839031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:18.304 [2024-11-20 11:44:23.839075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 11:44:23.839094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.839255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.839276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:18.304 [2024-11-20 11:44:23.839297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 11:44:23.839317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.839416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.839435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:18.304 [2024-11-20 11:44:23.839451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 11:44:23.839463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.839645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.839668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:18.304 [2024-11-20 11:44:23.839685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 
11:44:23.839697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.839787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.839806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:18.304 [2024-11-20 11:44:23.839822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 11:44:23.839835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.839895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.839911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:18.304 [2024-11-20 11:44:23.839926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 11:44:23.839938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.840016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:18.304 [2024-11-20 11:44:23.840034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:18.304 [2024-11-20 11:44:23.840050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:18.304 [2024-11-20 11:44:23.840063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:18.304 [2024-11-20 11:44:23.840282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.726 ms, result 0 00:30:18.304 true 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77000 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77000 ']' 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77000 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77000 00:30:18.304 killing process with pid 77000 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77000' 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77000 00:30:18.304 11:44:23 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77000 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:23.574 11:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:23.574 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:30:23.574 fio-3.35 00:30:23.574 Starting 1 thread 00:30:28.848 00:30:28.848 test: (groupid=0, jobs=1): err= 0: pid=77219: Wed Nov 20 11:44:34 2024 00:30:28.848 read: IOPS=911, BW=60.5MiB/s (63.5MB/s)(255MiB/4206msec) 00:30:28.848 slat (nsec): min=5978, max=58437, avg=7840.77, stdev=3156.80 00:30:28.848 clat (usec): min=357, max=855, avg=484.36, stdev=50.76 00:30:28.848 lat (usec): min=364, max=861, avg=492.20, stdev=51.35 00:30:28.848 clat percentiles (usec): 00:30:28.848 | 1.00th=[ 375], 5.00th=[ 416], 10.00th=[ 441], 20.00th=[ 449], 00:30:28.848 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:30:28.848 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:30:28.848 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 766], 00:30:28.848 | 99.99th=[ 857] 00:30:28.848 write: IOPS=917, BW=60.9MiB/s (63.9MB/s)(256MiB/4202msec); 0 zone resets 00:30:28.848 slat (nsec): min=19377, max=79330, avg=24488.99, stdev=5106.07 00:30:28.848 clat (usec): min=385, max=2889, avg=562.91, stdev=77.18 00:30:28.848 lat (usec): min=407, max=2912, avg=587.40, stdev=77.68 00:30:28.848 clat percentiles (usec): 00:30:28.848 | 1.00th=[ 453], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 506], 00:30:28.848 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:30:28.848 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 660], 00:30:28.848 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 947], 99.95th=[ 1909], 00:30:28.848 | 99.99th=[ 2900] 00:30:28.848 bw ( KiB/s): min=61064, max=63920, per=99.90%, avg=62340.62, stdev=1131.20, samples=8 00:30:28.848 iops : min= 898, max= 940, avg=916.75, stdev=16.66, samples=8 00:30:28.848 lat (usec) : 500=42.57%, 750=56.44%, 1000=0.96% 00:30:28.848 lat 
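fio cannot load the ASan-instrumented spdk_bdev plugin directly, so the wrapper above ldd-greps the plugin for libasan and preloads the sanitizer runtime ahead of the plugin itself; the whole dance reduces to a single LD_PRELOAD invocation. Below is that effective command, plus a hypothetical reconstruction of the job file from the parameters fio echoes back (randwrite, 68 KiB blocks, iodepth 1); the spdk_json_conf key, the ftl.json wiring, filename=ftl0, and the verify flavor are assumptions, not shown verbatim in this log:

# Effective invocation (paths exactly as logged above):
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

# Hypothetical minimal job file matching the parameters fio reports; the
# spdk_json_conf path, filename, and verify mode are assumptions.
cat > randw-verify.fio <<'EOF'
[test]
ioengine=spdk_bdev
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
filename=ftl0
rw=randwrite
bs=68k
iodepth=1
verify=md5
EOF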
(msec) : 2=0.01%, 4=0.01% 00:30:28.848 cpu : usr=99.02%, sys=0.21%, ctx=6, majf=0, minf=1169 00:30:28.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.848 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.848 00:30:28.848 Run status group 0 (all jobs): 00:30:28.848 READ: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=255MiB (267MB), run=4206-4206msec 00:30:28.848 WRITE: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=256MiB (269MB), run=4202-4202msec 00:30:30.751 ----------------------------------------------------- 00:30:30.751 Suppressions used: 00:30:30.751 count bytes template 00:30:30.751 1 5 /usr/src/fio/parse.c 00:30:30.751 1 8 libtcmalloc_minimal.so 00:30:30.751 1 904 libcrypto.so 00:30:30.751 ----------------------------------------------------- 00:30:30.751 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:30.751 11:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:30:31.010 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:30:31.010 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:30:31.010 fio-3.35 00:30:31.010 Starting 2 threads 00:31:03.095 00:31:03.095 first_half: (groupid=0, jobs=1): err= 0: pid=77329: Wed Nov 20 11:45:07 2024 00:31:03.095 read: IOPS=2178, BW=8714KiB/s (8924kB/s)(255MiB/29948msec) 00:31:03.095 slat (nsec): min=4321, max=85656, avg=8754.47, stdev=4184.17 00:31:03.095 clat (usec): min=550, max=332589, avg=44070.38, stdev=22147.97 00:31:03.095 lat (usec): min=564, max=332596, avg=44079.13, stdev=22148.48 00:31:03.095 clat percentiles (msec): 00:31:03.095 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 40], 00:31:03.095 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:31:03.095 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 48], 95.00th=[ 55], 00:31:03.095 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 262], 99.95th=[ 292], 00:31:03.095 | 99.99th=[ 326] 00:31:03.095 write: IOPS=2593, BW=10.1MiB/s (10.6MB/s)(256MiB/25274msec); 0 zone resets 00:31:03.095 slat (usec): min=5, max=599, avg=11.32, stdev= 7.42 00:31:03.095 clat (usec): min=473, max=121950, avg=14526.56, stdev=24487.16 00:31:03.095 lat (usec): min=490, max=121964, avg=14537.88, stdev=24487.91 00:31:03.095 clat percentiles (usec): 00:31:03.095 | 1.00th=[ 988], 5.00th=[ 1303], 10.00th=[ 1549], 20.00th=[ 2008], 00:31:03.095 | 30.00th=[ 3654], 40.00th=[ 5473], 50.00th=[ 6718], 60.00th=[ 7504], 00:31:03.095 | 70.00th=[ 8979], 80.00th=[ 13698], 90.00th=[ 41681], 95.00th=[ 88605], 00:31:03.095 | 99.00th=[104334], 99.50th=[106431], 99.90th=[114820], 99.95th=[115868], 00:31:03.095 | 99.99th=[121111] 00:31:03.095 bw ( KiB/s): min= 984, max=39408, per=90.26%, avg=18724.57, stdev=10444.39, samples=28 00:31:03.095 iops : min= 246, max= 9852, avg=4681.14, stdev=2611.10, samples=28 00:31:03.095 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.53% 00:31:03.095 lat (msec) : 2=9.48%, 4=6.36%, 10=20.73%, 20=8.93%, 50=46.78% 00:31:03.095 lat (msec) : 100=4.92%, 250=2.16%, 500=0.06% 00:31:03.095 cpu : usr=98.95%, sys=0.35%, ctx=70, majf=0, minf=5595 00:31:03.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:03.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.095 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.095 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.095 second_half: (groupid=0, jobs=1): err= 0: pid=77330: Wed Nov 20 11:45:07 2024 00:31:03.095 read: IOPS=2190, BW=8760KiB/s (8970kB/s)(255MiB/29763msec) 00:31:03.095 slat (usec): min=4, max=273, avg= 7.76, stdev= 3.55 00:31:03.095 clat (usec): min=921, max=340981, avg=44792.88, stdev=20541.70 00:31:03.095 lat (usec): min=929, max=340990, avg=44800.64, stdev=20541.90 00:31:03.095 clat percentiles (msec): 00:31:03.095 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:31:03.095 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 42], 00:31:03.095 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 49], 95.00th=[ 61], 00:31:03.095 | 
99.00th=[ 161], 99.50th=[ 190], 99.90th=[ 220], 99.95th=[ 236], 00:31:03.095 | 99.99th=[ 334] 00:31:03.095 write: IOPS=2857, BW=11.2MiB/s (11.7MB/s)(256MiB/22934msec); 0 zone resets 00:31:03.095 slat (usec): min=5, max=1169, avg=10.19, stdev= 7.72 00:31:03.095 clat (usec): min=451, max=123034, avg=13550.55, stdev=24290.10 00:31:03.095 lat (usec): min=486, max=123042, avg=13560.74, stdev=24290.36 00:31:03.095 clat percentiles (usec): 00:31:03.095 | 1.00th=[ 1074], 5.00th=[ 1385], 10.00th=[ 1582], 20.00th=[ 1860], 00:31:03.095 | 30.00th=[ 2245], 40.00th=[ 3949], 50.00th=[ 5473], 60.00th=[ 6587], 00:31:03.095 | 70.00th=[ 8455], 80.00th=[ 13698], 90.00th=[ 26084], 95.00th=[ 88605], 00:31:03.095 | 99.00th=[103285], 99.50th=[107480], 99.90th=[115868], 99.95th=[116917], 00:31:03.095 | 99.99th=[122160] 00:31:03.095 bw ( KiB/s): min= 856, max=44552, per=100.00%, avg=21848.04, stdev=10931.42, samples=24 00:31:03.095 iops : min= 214, max=11138, avg=5462.00, stdev=2732.84, samples=24 00:31:03.095 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.25% 00:31:03.095 lat (msec) : 2=12.01%, 4=8.29%, 10=16.34%, 20=8.83%, 50=46.45% 00:31:03.095 lat (msec) : 100=5.62%, 250=2.15%, 500=0.01% 00:31:03.095 cpu : usr=98.52%, sys=0.38%, ctx=197, majf=0, minf=5524 00:31:03.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:03.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.095 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.095 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.095 00:31:03.095 Run status group 0 (all jobs): 00:31:03.095 READ: bw=17.0MiB/s (17.8MB/s), 8714KiB/s-8760KiB/s (8924kB/s-8970kB/s), io=509MiB (534MB), run=29763-29948msec 00:31:03.095 WRITE: bw=20.3MiB/s (21.2MB/s), 10.1MiB/s-11.2MiB/s (10.6MB/s-11.7MB/s), io=512MiB (537MB), run=22934-25274msec 00:31:04.471 ----------------------------------------------------- 00:31:04.471 Suppressions used: 00:31:04.471 count bytes template 00:31:04.471 2 10 /usr/src/fio/parse.c 00:31:04.471 1 8 libtcmalloc_minimal.so 00:31:04.471 1 904 libcrypto.so 00:31:04.471 ----------------------------------------------------- 00:31:04.471 00:31:04.471 11:45:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:31:04.471 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.471 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:04.730 11:45:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:04.989 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:31:04.989 fio-3.35 00:31:04.989 Starting 1 thread 00:31:23.120 00:31:23.120 test: (groupid=0, jobs=1): err= 0: pid=77699: Wed Nov 20 11:45:28 2024 00:31:23.120 read: IOPS=6262, BW=24.5MiB/s (25.7MB/s)(255MiB/10411msec) 00:31:23.120 slat (nsec): min=4472, max=51327, avg=6948.82, stdev=2305.67 00:31:23.120 clat (usec): min=850, max=41489, avg=20426.55, stdev=1347.47 00:31:23.120 lat (usec): min=856, max=41510, avg=20433.50, stdev=1347.53 00:31:23.120 clat percentiles (usec): 00:31:23.120 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19530], 00:31:23.120 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:31:23.120 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[22676], 00:31:23.120 | 99.00th=[25560], 99.50th=[25822], 99.90th=[30802], 99.95th=[35914], 00:31:23.120 | 99.99th=[40633] 00:31:23.120 write: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(256MiB/5861msec); 0 zone resets 00:31:23.120 slat (usec): min=6, max=404, avg= 9.87, stdev= 5.28 00:31:23.120 clat (usec): min=711, max=68489, avg=11384.75, stdev=13863.71 00:31:23.120 lat (usec): min=719, max=68498, avg=11394.62, stdev=13863.67 00:31:23.120 clat percentiles (usec): 00:31:23.120 | 1.00th=[ 988], 5.00th=[ 1205], 10.00th=[ 1336], 20.00th=[ 1549], 00:31:23.120 | 30.00th=[ 1762], 40.00th=[ 2245], 50.00th=[ 7767], 60.00th=[ 9241], 00:31:23.120 | 70.00th=[10421], 80.00th=[12256], 90.00th=[39584], 95.00th=[42730], 00:31:23.120 | 99.00th=[49021], 99.50th=[50594], 99.90th=[53216], 99.95th=[57410], 00:31:23.120 | 99.99th=[64750] 00:31:23.120 bw ( KiB/s): min=26144, max=59608, per=97.68%, avg=43690.67, stdev=8803.97, samples=12 00:31:23.120 iops : min= 6536, max=14902, avg=10922.67, stdev=2200.99, samples=12 00:31:23.120 lat (usec) : 750=0.01%, 1000=0.56% 00:31:23.120 lat (msec) : 2=18.10%, 4=2.22%, 10=12.48%, 20=32.49%, 50=33.80% 00:31:23.120 lat (msec) : 100=0.33% 00:31:23.120 cpu : usr=98.84%, sys=0.30%, ctx=25, majf=0, minf=5565 00:31:23.120 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:23.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.120 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:23.120 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:23.120 00:31:23.120 Run status group 0 (all jobs): 00:31:23.120 READ: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=255MiB (267MB), run=10411-10411msec 00:31:23.120 WRITE: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=256MiB (268MB), run=5861-5861msec 00:31:24.497 ----------------------------------------------------- 00:31:24.497 Suppressions used: 00:31:24.497 count bytes template 00:31:24.497 1 5 /usr/src/fio/parse.c 00:31:24.497 2 192 /usr/src/fio/iolog.c 00:31:24.497 1 8 libtcmalloc_minimal.so 00:31:24.497 1 904 libcrypto.so 00:31:24.497 ----------------------------------------------------- 00:31:24.497 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:24.497 Remove shared memory files 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58011 /dev/shm/spdk_tgt_trace.pid75918 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:31:24.497 ************************************ 00:31:24.497 END TEST ftl_fio_basic 00:31:24.497 ************************************ 00:31:24.497 00:31:24.497 real 1m16.353s 00:31:24.497 user 2m50.095s 00:31:24.497 sys 0m4.467s 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.497 11:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:24.497 11:45:29 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:31:24.497 11:45:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.497 11:45:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.497 11:45:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:24.497 ************************************ 00:31:24.497 START TEST ftl_bdevperf 00:31:24.497 ************************************ 00:31:24.497 11:45:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:31:24.497 * Looking for test storage... 
00:31:24.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.497 --rc genhtml_branch_coverage=1 00:31:24.497 --rc genhtml_function_coverage=1 00:31:24.497 --rc genhtml_legend=1 00:31:24.497 --rc geninfo_all_blocks=1 00:31:24.497 --rc geninfo_unexecuted_blocks=1 00:31:24.497 00:31:24.497 ' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.497 --rc genhtml_branch_coverage=1 00:31:24.497 
--rc genhtml_function_coverage=1 00:31:24.497 --rc genhtml_legend=1 00:31:24.497 --rc geninfo_all_blocks=1 00:31:24.497 --rc geninfo_unexecuted_blocks=1 00:31:24.497 00:31:24.497 ' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.497 --rc genhtml_branch_coverage=1 00:31:24.497 --rc genhtml_function_coverage=1 00:31:24.497 --rc genhtml_legend=1 00:31:24.497 --rc geninfo_all_blocks=1 00:31:24.497 --rc geninfo_unexecuted_blocks=1 00:31:24.497 00:31:24.497 ' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.497 --rc genhtml_branch_coverage=1 00:31:24.497 --rc genhtml_function_coverage=1 00:31:24.497 --rc genhtml_legend=1 00:31:24.497 --rc geninfo_all_blocks=1 00:31:24.497 --rc geninfo_unexecuted_blocks=1 00:31:24.497 00:31:24.497 ' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:24.497 11:45:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77962 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77962 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77962 ']' 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.498 11:45:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:24.756 [2024-11-20 11:45:30.314146] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
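bdevperf is launched here with -z (start up, then idle until told over RPC) and -T ftl0 (report only the ftl0 target), so waitforlisten must poll the RPC socket before any bdev configuration can proceed; once setup completes, the I/O phase in -z mode is typically kicked off by a separate perform_tests RPC. A minimal sketch of such a wait loop, assuming the default /var/tmp/spdk.sock socket and an rpc_get_methods probe (an assumption; any cheap RPC would do):

# Hypothetical wait-for-RPC loop; socket path and probe method are assumptions.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done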
00:31:24.756 [2024-11-20 11:45:30.314310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77962 ] 00:31:24.756 [2024-11-20 11:45:30.492407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.014 [2024-11-20 11:45:30.654142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:31:25.581 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:31:25.839 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:26.406 { 00:31:26.406 "name": "nvme0n1", 00:31:26.406 "aliases": [ 00:31:26.406 "8aa58da7-87bd-45ed-897f-c364c74dd34f" 00:31:26.406 ], 00:31:26.406 "product_name": "NVMe disk", 00:31:26.406 "block_size": 4096, 00:31:26.406 "num_blocks": 1310720, 00:31:26.406 "uuid": "8aa58da7-87bd-45ed-897f-c364c74dd34f", 00:31:26.406 "numa_id": -1, 00:31:26.406 "assigned_rate_limits": { 00:31:26.406 "rw_ios_per_sec": 0, 00:31:26.406 "rw_mbytes_per_sec": 0, 00:31:26.406 "r_mbytes_per_sec": 0, 00:31:26.406 "w_mbytes_per_sec": 0 00:31:26.406 }, 00:31:26.406 "claimed": true, 00:31:26.406 "claim_type": "read_many_write_one", 00:31:26.406 "zoned": false, 00:31:26.406 "supported_io_types": { 00:31:26.406 "read": true, 00:31:26.406 "write": true, 00:31:26.406 "unmap": true, 00:31:26.406 "flush": true, 00:31:26.406 "reset": true, 00:31:26.406 "nvme_admin": true, 00:31:26.406 "nvme_io": true, 00:31:26.406 "nvme_io_md": false, 00:31:26.406 "write_zeroes": true, 00:31:26.406 "zcopy": false, 00:31:26.406 "get_zone_info": false, 00:31:26.406 "zone_management": false, 00:31:26.406 "zone_append": false, 00:31:26.406 "compare": true, 00:31:26.406 "compare_and_write": false, 00:31:26.406 "abort": true, 00:31:26.406 "seek_hole": false, 00:31:26.406 "seek_data": false, 00:31:26.406 "copy": true, 00:31:26.406 "nvme_iov_md": false 00:31:26.406 }, 00:31:26.406 "driver_specific": { 00:31:26.406 
"nvme": [ 00:31:26.406 { 00:31:26.406 "pci_address": "0000:00:11.0", 00:31:26.406 "trid": { 00:31:26.406 "trtype": "PCIe", 00:31:26.406 "traddr": "0000:00:11.0" 00:31:26.406 }, 00:31:26.406 "ctrlr_data": { 00:31:26.406 "cntlid": 0, 00:31:26.406 "vendor_id": "0x1b36", 00:31:26.406 "model_number": "QEMU NVMe Ctrl", 00:31:26.406 "serial_number": "12341", 00:31:26.406 "firmware_revision": "8.0.0", 00:31:26.406 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:26.406 "oacs": { 00:31:26.406 "security": 0, 00:31:26.406 "format": 1, 00:31:26.406 "firmware": 0, 00:31:26.406 "ns_manage": 1 00:31:26.406 }, 00:31:26.406 "multi_ctrlr": false, 00:31:26.406 "ana_reporting": false 00:31:26.406 }, 00:31:26.406 "vs": { 00:31:26.406 "nvme_version": "1.4" 00:31:26.406 }, 00:31:26.406 "ns_data": { 00:31:26.406 "id": 1, 00:31:26.406 "can_share": false 00:31:26.406 } 00:31:26.406 } 00:31:26.406 ], 00:31:26.406 "mp_policy": "active_passive" 00:31:26.406 } 00:31:26.406 } 00:31:26.406 ]' 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:26.406 11:45:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:26.665 11:45:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=4382d387-958d-420f-b239-c0c3ac9f5778 00:31:26.665 11:45:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:31:26.665 11:45:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4382d387-958d-420f-b239-c0c3ac9f5778 00:31:26.923 11:45:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:27.181 11:45:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c60f5b4a-20ef-4a4c-ae8f-a293273190f8 00:31:27.181 11:45:32 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c60f5b4a-20ef-4a4c-ae8f-a293273190f8 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:31:27.506 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:27.506 11:45:33 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:27.507 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:27.507 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:31:27.507 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:31:27.507 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:27.766 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:27.766 { 00:31:27.766 "name": "5e5ac780-dd7b-42a2-8e0b-4609bac3386b", 00:31:27.766 "aliases": [ 00:31:27.766 "lvs/nvme0n1p0" 00:31:27.766 ], 00:31:27.766 "product_name": "Logical Volume", 00:31:27.766 "block_size": 4096, 00:31:27.766 "num_blocks": 26476544, 00:31:27.766 "uuid": "5e5ac780-dd7b-42a2-8e0b-4609bac3386b", 00:31:27.766 "assigned_rate_limits": { 00:31:27.766 "rw_ios_per_sec": 0, 00:31:27.766 "rw_mbytes_per_sec": 0, 00:31:27.766 "r_mbytes_per_sec": 0, 00:31:27.766 "w_mbytes_per_sec": 0 00:31:27.766 }, 00:31:27.766 "claimed": false, 00:31:27.766 "zoned": false, 00:31:27.766 "supported_io_types": { 00:31:27.766 "read": true, 00:31:27.766 "write": true, 00:31:27.766 "unmap": true, 00:31:27.766 "flush": false, 00:31:27.766 "reset": true, 00:31:27.766 "nvme_admin": false, 00:31:27.766 "nvme_io": false, 00:31:27.766 "nvme_io_md": false, 00:31:27.766 "write_zeroes": true, 00:31:27.766 "zcopy": false, 00:31:27.766 "get_zone_info": false, 00:31:27.766 "zone_management": false, 00:31:27.766 "zone_append": false, 00:31:27.766 "compare": false, 00:31:27.766 "compare_and_write": false, 00:31:27.766 "abort": false, 00:31:27.766 "seek_hole": true, 00:31:27.766 "seek_data": true, 00:31:27.766 "copy": false, 00:31:27.766 "nvme_iov_md": false 00:31:27.766 }, 00:31:27.766 "driver_specific": { 00:31:27.766 "lvol": { 00:31:27.766 "lvol_store_uuid": "c60f5b4a-20ef-4a4c-ae8f-a293273190f8", 00:31:27.766 "base_bdev": "nvme0n1", 00:31:27.766 "thin_provision": true, 00:31:27.766 "num_allocated_clusters": 0, 00:31:27.766 "snapshot": false, 00:31:27.766 "clone": false, 00:31:27.766 "esnap_clone": false 00:31:27.766 } 00:31:27.766 } 00:31:27.766 } 00:31:27.766 ]' 00:31:27.766 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:27.766 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:31:27.766 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:28.024 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:28.024 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:28.024 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:31:28.024 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:31:28.024 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:31:28.024 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:31:28.283 11:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:28.542 { 00:31:28.542 "name": "5e5ac780-dd7b-42a2-8e0b-4609bac3386b", 00:31:28.542 "aliases": [ 00:31:28.542 "lvs/nvme0n1p0" 00:31:28.542 ], 00:31:28.542 "product_name": "Logical Volume", 00:31:28.542 "block_size": 4096, 00:31:28.542 "num_blocks": 26476544, 00:31:28.542 "uuid": "5e5ac780-dd7b-42a2-8e0b-4609bac3386b", 00:31:28.542 "assigned_rate_limits": { 00:31:28.542 "rw_ios_per_sec": 0, 00:31:28.542 "rw_mbytes_per_sec": 0, 00:31:28.542 "r_mbytes_per_sec": 0, 00:31:28.542 "w_mbytes_per_sec": 0 00:31:28.542 }, 00:31:28.542 "claimed": false, 00:31:28.542 "zoned": false, 00:31:28.542 "supported_io_types": { 00:31:28.542 "read": true, 00:31:28.542 "write": true, 00:31:28.542 "unmap": true, 00:31:28.542 "flush": false, 00:31:28.542 "reset": true, 00:31:28.542 "nvme_admin": false, 00:31:28.542 "nvme_io": false, 00:31:28.542 "nvme_io_md": false, 00:31:28.542 "write_zeroes": true, 00:31:28.542 "zcopy": false, 00:31:28.542 "get_zone_info": false, 00:31:28.542 "zone_management": false, 00:31:28.542 "zone_append": false, 00:31:28.542 "compare": false, 00:31:28.542 "compare_and_write": false, 00:31:28.542 "abort": false, 00:31:28.542 "seek_hole": true, 00:31:28.542 "seek_data": true, 00:31:28.542 "copy": false, 00:31:28.542 "nvme_iov_md": false 00:31:28.542 }, 00:31:28.542 "driver_specific": { 00:31:28.542 "lvol": { 00:31:28.542 "lvol_store_uuid": "c60f5b4a-20ef-4a4c-ae8f-a293273190f8", 00:31:28.542 "base_bdev": "nvme0n1", 00:31:28.542 "thin_provision": true, 00:31:28.542 "num_allocated_clusters": 0, 00:31:28.542 "snapshot": false, 00:31:28.542 "clone": false, 00:31:28.542 "esnap_clone": false 00:31:28.542 } 00:31:28.542 } 00:31:28.542 } 00:31:28.542 ]' 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:31:28.542 11:45:34 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:31:28.800 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e5ac780-dd7b-42a2-8e0b-4609bac3386b 00:31:29.059 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:29.059 { 00:31:29.059 "name": "5e5ac780-dd7b-42a2-8e0b-4609bac3386b", 00:31:29.059 "aliases": [ 00:31:29.059 "lvs/nvme0n1p0" 00:31:29.059 ], 00:31:29.059 "product_name": "Logical Volume", 00:31:29.059 "block_size": 4096, 00:31:29.059 "num_blocks": 26476544, 00:31:29.059 "uuid": "5e5ac780-dd7b-42a2-8e0b-4609bac3386b", 00:31:29.059 "assigned_rate_limits": { 00:31:29.059 "rw_ios_per_sec": 0, 00:31:29.059 "rw_mbytes_per_sec": 0, 00:31:29.059 "r_mbytes_per_sec": 0, 00:31:29.059 "w_mbytes_per_sec": 0 00:31:29.059 }, 00:31:29.059 "claimed": false, 00:31:29.059 "zoned": false, 00:31:29.059 "supported_io_types": { 00:31:29.059 "read": true, 00:31:29.059 "write": true, 00:31:29.059 "unmap": true, 00:31:29.059 "flush": false, 00:31:29.059 "reset": true, 00:31:29.059 "nvme_admin": false, 00:31:29.059 "nvme_io": false, 00:31:29.059 "nvme_io_md": false, 00:31:29.059 "write_zeroes": true, 00:31:29.059 "zcopy": false, 00:31:29.059 "get_zone_info": false, 00:31:29.059 "zone_management": false, 00:31:29.059 "zone_append": false, 00:31:29.059 "compare": false, 00:31:29.059 "compare_and_write": false, 00:31:29.059 "abort": false, 00:31:29.059 "seek_hole": true, 00:31:29.059 "seek_data": true, 00:31:29.059 "copy": false, 00:31:29.059 "nvme_iov_md": false 00:31:29.059 }, 00:31:29.059 "driver_specific": { 00:31:29.059 "lvol": { 00:31:29.059 "lvol_store_uuid": "c60f5b4a-20ef-4a4c-ae8f-a293273190f8", 00:31:29.059 "base_bdev": "nvme0n1", 00:31:29.059 "thin_provision": true, 00:31:29.059 "num_allocated_clusters": 0, 00:31:29.059 "snapshot": false, 00:31:29.059 "clone": false, 00:31:29.059 "esnap_clone": false 00:31:29.059 } 00:31:29.059 } 00:31:29.059 } 00:31:29.059 ]' 00:31:29.059 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:29.059 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:31:29.059 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:29.317 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:29.317 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:29.317 11:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:31:29.317 11:45:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:31:29.317 11:45:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e5ac780-dd7b-42a2-8e0b-4609bac3386b -c nvc0n1p0 --l2p_dram_limit 20 00:31:29.577 [2024-11-20 11:45:35.123456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.123538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:29.577 [2024-11-20 11:45:35.123595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:29.577 [2024-11-20 11:45:35.123613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.123735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.123774] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:29.577 [2024-11-20 11:45:35.123788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:29.577 [2024-11-20 11:45:35.123801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.123833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:29.577 [2024-11-20 11:45:35.124887] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:29.577 [2024-11-20 11:45:35.124919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.124936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:29.577 [2024-11-20 11:45:35.124949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:31:29.577 [2024-11-20 11:45:35.124962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.125092] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5662df62-9eb9-479e-b18d-99c49faa47ad 00:31:29.577 [2024-11-20 11:45:35.127140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.127181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:29.577 [2024-11-20 11:45:35.127216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:29.577 [2024-11-20 11:45:35.127231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.137569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.137628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:29.577 [2024-11-20 11:45:35.137678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.245 ms 00:31:29.577 [2024-11-20 11:45:35.137703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.137820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.137839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:29.577 [2024-11-20 11:45:35.137859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:31:29.577 [2024-11-20 11:45:35.137870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.137939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.137957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:29.577 [2024-11-20 11:45:35.137971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:29.577 [2024-11-20 11:45:35.137982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.138013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:29.577 [2024-11-20 11:45:35.143465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.144348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:29.577 [2024-11-20 11:45:35.144374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.464 ms 00:31:29.577 [2024-11-20 11:45:35.144392] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.144440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.144460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:29.577 [2024-11-20 11:45:35.144473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:29.577 [2024-11-20 11:45:35.144486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.144530] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:29.577 [2024-11-20 11:45:35.144751] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:29.577 [2024-11-20 11:45:35.144771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:29.577 [2024-11-20 11:45:35.144788] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:29.577 [2024-11-20 11:45:35.144802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:29.577 [2024-11-20 11:45:35.144816] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:29.577 [2024-11-20 11:45:35.144828] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:29.577 [2024-11-20 11:45:35.144840] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:29.577 [2024-11-20 11:45:35.144849] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:29.577 [2024-11-20 11:45:35.144862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:29.577 [2024-11-20 11:45:35.144873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.144889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:29.577 [2024-11-20 11:45:35.144900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:31:29.577 [2024-11-20 11:45:35.144912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.145000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.577 [2024-11-20 11:45:35.145021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:29.577 [2024-11-20 11:45:35.145033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:29.577 [2024-11-20 11:45:35.145048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.577 [2024-11-20 11:45:35.145141] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:29.577 [2024-11-20 11:45:35.145158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:29.577 [2024-11-20 11:45:35.145172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:29.577 [2024-11-20 11:45:35.145185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:29.577 [2024-11-20 11:45:35.145251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:29.577 
[2024-11-20 11:45:35.145278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:29.577 [2024-11-20 11:45:35.145289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:29.577 [2024-11-20 11:45:35.145312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:29.577 [2024-11-20 11:45:35.145337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:29.577 [2024-11-20 11:45:35.145349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:29.577 [2024-11-20 11:45:35.145378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:29.577 [2024-11-20 11:45:35.145390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:29.577 [2024-11-20 11:45:35.145405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:29.577 [2024-11-20 11:45:35.145429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:29.577 [2024-11-20 11:45:35.145440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:29.577 [2024-11-20 11:45:35.145476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.577 [2024-11-20 11:45:35.145513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:29.577 [2024-11-20 11:45:35.145540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.577 [2024-11-20 11:45:35.145601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:29.577 [2024-11-20 11:45:35.145617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.577 [2024-11-20 11:45:35.145654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:29.577 [2024-11-20 11:45:35.145694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:29.577 [2024-11-20 11:45:35.145718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:29.577 [2024-11-20 11:45:35.145728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:29.577 [2024-11-20 11:45:35.145739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:29.577 [2024-11-20 11:45:35.145749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:29.577 [2024-11-20 11:45:35.145761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:29.578 [2024-11-20 11:45:35.145770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:29.578 [2024-11-20 11:45:35.145782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:29.578 [2024-11-20 11:45:35.145792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:31:29.578 [2024-11-20 11:45:35.145803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.578 [2024-11-20 11:45:35.145813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:29.578 [2024-11-20 11:45:35.145825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:29.578 [2024-11-20 11:45:35.145834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.578 [2024-11-20 11:45:35.145850] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:29.578 [2024-11-20 11:45:35.145860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:29.578 [2024-11-20 11:45:35.145873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:29.578 [2024-11-20 11:45:35.145883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:29.578 [2024-11-20 11:45:35.145900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:29.578 [2024-11-20 11:45:35.145911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:29.578 [2024-11-20 11:45:35.145923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:29.578 [2024-11-20 11:45:35.145933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:29.578 [2024-11-20 11:45:35.145967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:29.578 [2024-11-20 11:45:35.145977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:29.578 [2024-11-20 11:45:35.145993] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:29.578 [2024-11-20 11:45:35.146006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:29.578 [2024-11-20 11:45:35.146031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:29.578 [2024-11-20 11:45:35.146043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:29.578 [2024-11-20 11:45:35.146054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:29.578 [2024-11-20 11:45:35.146066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:29.578 [2024-11-20 11:45:35.146076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:29.578 [2024-11-20 11:45:35.146088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:29.578 [2024-11-20 11:45:35.146098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:29.578 [2024-11-20 11:45:35.146112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:29.578 [2024-11-20 11:45:35.146123] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:29.578 [2024-11-20 11:45:35.146180] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:29.578 [2024-11-20 11:45:35.146192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:29.578 [2024-11-20 11:45:35.146218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:29.578 [2024-11-20 11:45:35.146238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:29.578 [2024-11-20 11:45:35.146251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:29.578 [2024-11-20 11:45:35.146268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.578 [2024-11-20 11:45:35.146284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:29.578 [2024-11-20 11:45:35.146316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:31:29.578 [2024-11-20 11:45:35.146327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.578 [2024-11-20 11:45:35.146378] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
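The startup trace above is the tail end of a recipe the script assembled step by step: a thin-provisioned lvol on the base namespace (0000:00:11.0) becomes the FTL data device, and a split partition of the second namespace (0000:00:10.0) becomes its non-volatile write-buffer cache. A minimal sketch of that RPC sequence, using the sizes computed in this run; the <lvstore-uuid> and <lvol-uuid> placeholders stand for the UUIDs returned by the corresponding earlier calls:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach the base (data) and cache controllers by PCIe address
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # lvstore on the base namespace, then a 103424 MiB thin-provisioned (-t) lvol in it
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
  # one 5171 MiB split of the cache namespace serves as the NV cache
  $rpc bdev_split_create nvc0n1 -s 5171 1
  # bind both into an FTL bdev, capping the L2P table at 20 MiB of DRAM
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The layout dump above is consistent with those numbers: 103424 MiB of base capacity, a 5171 MiB NV cache, and 20971520 L2P entries at 4 bytes each, which is exactly the 80 MiB l2p region.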
00:31:29.578 [2024-11-20 11:45:35.146394] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:32.865 [2024-11-20 11:45:38.012681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.865 [2024-11-20 11:45:38.013018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:32.865 [2024-11-20 11:45:38.013061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2866.289 ms 00:31:32.865 [2024-11-20 11:45:38.013075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.865 [2024-11-20 11:45:38.050235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.865 [2024-11-20 11:45:38.050507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:32.865 [2024-11-20 11:45:38.050580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.864 ms 00:31:32.865 [2024-11-20 11:45:38.050595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.865 [2024-11-20 11:45:38.050786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.865 [2024-11-20 11:45:38.050806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:32.865 [2024-11-20 11:45:38.050826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:31:32.865 [2024-11-20 11:45:38.050837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.865 [2024-11-20 11:45:38.101295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.865 [2024-11-20 11:45:38.101352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:32.865 [2024-11-20 11:45:38.101376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.401 ms 00:31:32.865 [2024-11-20 11:45:38.101389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.865 [2024-11-20 11:45:38.101439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.865 [2024-11-20 11:45:38.101458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.865 [2024-11-20 11:45:38.101473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:32.865 [2024-11-20 11:45:38.101484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.865 [2024-11-20 11:45:38.102241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.865 [2024-11-20 11:45:38.102277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.865 [2024-11-20 11:45:38.102312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:31:32.865 [2024-11-20 11:45:38.102340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.865 [2024-11-20 11:45:38.102531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.102564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.866 [2024-11-20 11:45:38.102597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:31:32.866 [2024-11-20 11:45:38.102609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.121447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.121676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.866 [2024-11-20 
11:45:38.121817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.628 ms 00:31:32.866 [2024-11-20 11:45:38.121842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.136919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:31:32.866 [2024-11-20 11:45:38.145096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.145151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:32.866 [2024-11-20 11:45:38.145168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.124 ms 00:31:32.866 [2024-11-20 11:45:38.145182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.220553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.220948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:32.866 [2024-11-20 11:45:38.220991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.305 ms 00:31:32.866 [2024-11-20 11:45:38.221008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.221284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.221313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:32.866 [2024-11-20 11:45:38.221327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:31:32.866 [2024-11-20 11:45:38.221341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.250278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.250325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:32.866 [2024-11-20 11:45:38.250359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.870 ms 00:31:32.866 [2024-11-20 11:45:38.250372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.278416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.278480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:32.866 [2024-11-20 11:45:38.278498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.002 ms 00:31:32.866 [2024-11-20 11:45:38.278511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.279440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.279477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:32.866 [2024-11-20 11:45:38.279492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:31:32.866 [2024-11-20 11:45:38.279506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.360505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.360628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:32.866 [2024-11-20 11:45:38.360649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.910 ms 00:31:32.866 [2024-11-20 11:45:38.360680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 
11:45:38.391008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.391071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:32.866 [2024-11-20 11:45:38.391089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.221 ms 00:31:32.866 [2024-11-20 11:45:38.391111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.421093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.421157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:32.866 [2024-11-20 11:45:38.421174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.941 ms 00:31:32.866 [2024-11-20 11:45:38.421187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.453072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.453135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:32.866 [2024-11-20 11:45:38.453154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.819 ms 00:31:32.866 [2024-11-20 11:45:38.453168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.453241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.453268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:32.866 [2024-11-20 11:45:38.453287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:32.866 [2024-11-20 11:45:38.453302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.453426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.866 [2024-11-20 11:45:38.453449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:32.866 [2024-11-20 11:45:38.453462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:32.866 [2024-11-20 11:45:38.453476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.866 [2024-11-20 11:45:38.454772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3330.805 ms, result 0 00:31:32.866 { 00:31:32.866 "name": "ftl0", 00:31:32.866 "uuid": "5662df62-9eb9-479e-b18d-99c49faa47ad" 00:31:32.866 } 00:31:32.866 11:45:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:31:32.866 11:45:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:31:32.866 11:45:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:31:33.125 11:45:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:31:33.383 [2024-11-20 11:45:38.947140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:33.383 I/O size of 69632 is greater than zero copy threshold (65536). 00:31:33.383 Zero copy mechanism will not be used. 00:31:33.383 Running I/O for 4 seconds... 
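bdevperf was launched earlier with -z (start idle and wait for an RPC) and -T ftl0 (exercise only that bdev), so each benchmark below is driven over the RPC socket by bdevperf.py. A sketch of the pattern, with the exact flags this log uses for its three runs:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  perf_rpc=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $bdevperf -z -T ftl0 &                                    # stays idle until told to run
  $perf_rpc perform_tests -q 1 -w randwrite -t 4 -o 69632   # run 1: qd 1, 68 KiB random writes
  $perf_rpc perform_tests -q 128 -w randwrite -t 4 -o 4096  # run 2: qd 128, 4 KiB random writes
  $perf_rpc perform_tests -q 128 -w verify -t 4 -o 4096     # run 3: qd 128, write-then-read-back verify

The 69632-byte I/O size of the first run exceeds bdevperf's 65536-byte zero-copy threshold, hence the notice above that the zero copy mechanism will not be used.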
00:31:35.256 1615.00 IOPS, 107.25 MiB/s [2024-11-20T11:45:42.429Z] 1634.00 IOPS, 108.51 MiB/s [2024-11-20T11:45:43.017Z] 1654.00 IOPS, 109.84 MiB/s [2024-11-20T11:45:43.017Z] 1669.25 IOPS, 110.85 MiB/s 00:31:37.251 Latency(us) 00:31:37.251 [2024-11-20T11:45:43.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.251 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:31:37.251 ftl0 : 4.00 1668.63 110.81 0.00 0.00 630.34 260.65 2412.92 00:31:37.251 [2024-11-20T11:45:43.017Z] =================================================================================================================== 00:31:37.251 [2024-11-20T11:45:43.017Z] Total : 1668.63 110.81 0.00 0.00 630.34 260.65 2412.92 00:31:37.251 { 00:31:37.251 "results": [ 00:31:37.251 { 00:31:37.251 "job": "ftl0", 00:31:37.251 "core_mask": "0x1", 00:31:37.251 "workload": "randwrite", 00:31:37.252 "status": "finished", 00:31:37.252 "queue_depth": 1, 00:31:37.252 "io_size": 69632, 00:31:37.252 "runtime": 4.002081, 00:31:37.252 "iops": 1668.6318942570128, 00:31:37.252 "mibps": 110.80758672800476, 00:31:37.252 "io_failed": 0, 00:31:37.252 "io_timeout": 0, 00:31:37.252 "avg_latency_us": 630.3439440224346, 00:31:37.252 "min_latency_us": 260.6545454545454, 00:31:37.252 "max_latency_us": 2412.9163636363637 00:31:37.252 } 00:31:37.252 ], 00:31:37.252 "core_count": 1 00:31:37.252 } 00:31:37.252 [2024-11-20 11:45:42.959724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:31:37.252 11:45:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:31:37.510 [2024-11-20 11:45:43.106791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:37.510 Running I/O for 4 seconds... 
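A quick way to sanity-check the result tables: the MiB/s column is just IOPS x io_size / 2^20. For the run above, 1668.63 IOPS x 69632 bytes comes to about 110.81 MiB/s, matching the reported throughput; for the 4096-byte runs that follow, the same identity reduces to IOPS / 256.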
00:31:39.380 8738.00 IOPS, 34.13 MiB/s [2024-11-20T11:45:46.521Z] 8525.00 IOPS, 33.30 MiB/s [2024-11-20T11:45:47.455Z] 8226.67 IOPS, 32.14 MiB/s [2024-11-20T11:45:47.455Z] 8063.50 IOPS, 31.50 MiB/s 00:31:41.689 Latency(us) 00:31:41.689 [2024-11-20T11:45:47.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.689 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.689 ftl0 : 4.02 8053.98 31.46 0.00 0.00 15851.77 310.92 30384.87 00:31:41.689 [2024-11-20T11:45:47.455Z] =================================================================================================================== 00:31:41.689 [2024-11-20T11:45:47.455Z] Total : 8053.98 31.46 0.00 0.00 15851.77 0.00 30384.87 00:31:41.689 { 00:31:41.689 "results": [ 00:31:41.689 { 00:31:41.689 "job": "ftl0", 00:31:41.689 "core_mask": "0x1", 00:31:41.689 "workload": "randwrite", 00:31:41.689 "status": "finished", 00:31:41.690 "queue_depth": 128, 00:31:41.690 "io_size": 4096, 00:31:41.690 "runtime": 4.020371, 00:31:41.690 "iops": 8053.98307768114, 00:31:41.690 "mibps": 31.46087139719195, 00:31:41.690 "io_failed": 0, 00:31:41.690 "io_timeout": 0, 00:31:41.690 "avg_latency_us": 15851.768050760851, 00:31:41.690 "min_latency_us": 310.9236363636364, 00:31:41.690 "max_latency_us": 30384.872727272726 00:31:41.690 } 00:31:41.690 ], 00:31:41.690 "core_count": 1 00:31:41.690 } 00:31:41.690 [2024-11-20 11:45:47.137568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:31:41.690 11:45:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:31:41.690 [2024-11-20 11:45:47.283290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:41.690 Running I/O for 4 seconds... 
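Around these runs sit two more RPCs: bdev_ftl_get_stats, piped through jq and grep above to confirm ftl0 actually registered before any workload ran, and bdev_ftl_delete, issued after the verify run below finishes, which triggers the shutdown sequence that persists L2P, NV cache, band, and trim metadata before marking the FTL clean. A sketch of both calls as they appear in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # gate: make sure the FTL bdev answers before benchmarking it
  $rpc bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # teardown: persists metadata and sets the clean state, then removes ftl0
  $rpc bdev_ftl_delete -b ftl0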
00:31:43.559 5596.00 IOPS, 21.86 MiB/s [2024-11-20T11:45:50.705Z] 5671.50 IOPS, 22.15 MiB/s [2024-11-20T11:45:51.640Z] 5608.33 IOPS, 21.91 MiB/s [2024-11-20T11:45:51.640Z] 5561.50 IOPS, 21.72 MiB/s 00:31:45.874 Latency(us) 00:31:45.874 [2024-11-20T11:45:51.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.874 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:45.874 Verification LBA range: start 0x0 length 0x1400000 00:31:45.874 ftl0 : 4.01 5574.11 21.77 0.00 0.00 22878.66 368.64 26929.34 00:31:45.874 [2024-11-20T11:45:51.640Z] =================================================================================================================== 00:31:45.874 [2024-11-20T11:45:51.640Z] Total : 5574.11 21.77 0.00 0.00 22878.66 0.00 26929.34 00:31:45.874 { 00:31:45.874 "results": [ 00:31:45.874 { 00:31:45.874 "job": "ftl0", 00:31:45.874 "core_mask": "0x1", 00:31:45.874 "workload": "verify", 00:31:45.874 "status": "finished", 00:31:45.874 "verify_range": { 00:31:45.874 "start": 0, 00:31:45.874 "length": 20971520 00:31:45.874 }, 00:31:45.874 "queue_depth": 128, 00:31:45.874 "io_size": 4096, 00:31:45.874 "runtime": 4.013374, 00:31:45.874 "iops": 5574.1129533405065, 00:31:45.874 "mibps": 21.773878723986353, 00:31:45.874 "io_failed": 0, 00:31:45.874 "io_timeout": 0, 00:31:45.875 "avg_latency_us": 22878.659901414572, 00:31:45.875 "min_latency_us": 368.64, 00:31:45.875 "max_latency_us": 26929.33818181818 00:31:45.875 } 00:31:45.875 ], 00:31:45.875 "core_count": 1 00:31:45.875 } 00:31:45.875 [2024-11-20 11:45:51.315634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:31:45.875 11:45:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-20 11:45:51.601442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-20 11:45:51.601521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-11-20 11:45:51.601579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms [2024-11-20 11:45:51.601619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-11-20 11:45:51.601652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-11-20 11:45:51.605075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-20 11:45:51.605106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-11-20 11:45:51.605140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.397 ms [2024-11-20 11:45:51.605152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-11-20 11:45:51.606933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-20 11:45:51.606990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-11-20 11:45:51.607026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms [2024-11-20 11:45:51.607037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-11-20 11:45:51.785687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-20 11:45:51.785918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:46.134 [2024-11-20 11:45:51.785959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.616 ms 00:31:46.134 [2024-11-20 11:45:51.785973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.134 [2024-11-20 11:45:51.792041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.134 [2024-11-20 11:45:51.792085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:46.134 [2024-11-20 11:45:51.792120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.015 ms 00:31:46.134 [2024-11-20 11:45:51.792131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.134 [2024-11-20 11:45:51.820709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.134 [2024-11-20 11:45:51.820751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:46.134 [2024-11-20 11:45:51.820787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.499 ms 00:31:46.134 [2024-11-20 11:45:51.820799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.134 [2024-11-20 11:45:51.838482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.134 [2024-11-20 11:45:51.838526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:46.134 [2024-11-20 11:45:51.838599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.637 ms 00:31:46.134 [2024-11-20 11:45:51.838611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.134 [2024-11-20 11:45:51.838834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.134 [2024-11-20 11:45:51.838855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:46.134 [2024-11-20 11:45:51.838873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:31:46.134 [2024-11-20 11:45:51.838885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.134 [2024-11-20 11:45:51.867111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.134 [2024-11-20 11:45:51.867302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:46.134 [2024-11-20 11:45:51.867351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.201 ms 00:31:46.134 [2024-11-20 11:45:51.867363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.134 [2024-11-20 11:45:51.897741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.395 [2024-11-20 11:45:51.897959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:46.395 [2024-11-20 11:45:51.897993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.328 ms 00:31:46.395 [2024-11-20 11:45:51.898009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.395 [2024-11-20 11:45:51.927253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.395 [2024-11-20 11:45:51.927295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:46.395 [2024-11-20 11:45:51.927347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.193 ms 00:31:46.395 [2024-11-20 11:45:51.927358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.395 [2024-11-20 11:45:51.955789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.395 [2024-11-20 11:45:51.955831] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:46.395 [2024-11-20 11:45:51.955869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.329 ms 00:31:46.395 [2024-11-20 11:45:51.955880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.395 [2024-11-20 11:45:51.955926] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:46.395 [2024-11-20 11:45:51.955963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:46.395 [ftl_debug.c: 167:ftl_dev_dump_bands repeats the same record for Bands 2 through 71, all reading "0 / 261120 wr_cnt: 0 state: free"] 00:31:46.396 [2024-11-20 11:45:51.958020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:46.396 [2024-11-20 11:45:51.958436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:46.396 [2024-11-20 11:45:51.958450] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5662df62-9eb9-479e-b18d-99c49faa47ad 00:31:46.396 [2024-11-20 11:45:51.958461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:46.396 [2024-11-20 11:45:51.958474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:46.396 [2024-11-20 11:45:51.958487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:46.396 [2024-11-20 11:45:51.958500] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:46.396 [2024-11-20 11:45:51.958510] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:46.396 [2024-11-20 11:45:51.958524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:46.396 [2024-11-20 11:45:51.958534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:46.396 [2024-11-20 11:45:51.958565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:46.396 [2024-11-20 11:45:51.958575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:46.396 [2024-11-20 11:45:51.958609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.397 [2024-11-20 11:45:51.958625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:46.397 [2024-11-20 11:45:51.958641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:31:46.397 [2024-11-20 11:45:51.958653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:51.974994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.397 [2024-11-20 11:45:51.975034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:46.397 [2024-11-20 11:45:51.975071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.275 ms 00:31:46.397 [2024-11-20 11:45:51.975082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:51.975531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.397 [2024-11-20 11:45:51.975547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:46.397 [2024-11-20 11:45:51.975604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:31:46.397 [2024-11-20 11:45:51.975637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:52.020581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.397 [2024-11-20 11:45:52.020652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:46.397 [2024-11-20 11:45:52.020691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.397 [2024-11-20 11:45:52.020702] mngt/ftl_mngt.c: 431:trace_step: 
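
Side note on the statistics block above: the "WAF: inf" line is the write amplification factor, conventionally total media writes divided by user writes. This shutdown dump shows total writes: 960 against user writes: 0, i.e. only internal metadata traffic and no host I/O yet, so the ratio is reported as inf. A minimal shell sketch of that arithmetic; the waf helper name is ours, not SPDK's:

    # Hypothetical helper reproducing the "WAF: inf" line from the two
    # counters printed by ftl_dev_dump_stats above.
    waf() {
      local total_writes=$1 user_writes=$2
      if [ "$user_writes" -eq 0 ]; then
        echo inf    # no host writes yet, as in this dump
      else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "%.2f\n", t / u }'
      fi
    }
    waf 960 0    # -> inf   (total writes: 960, user writes: 0)
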
*NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:52.020770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.397 [2024-11-20 11:45:52.020785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:46.397 [2024-11-20 11:45:52.020799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.397 [2024-11-20 11:45:52.020809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:52.020941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.397 [2024-11-20 11:45:52.020965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:46.397 [2024-11-20 11:45:52.020980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.397 [2024-11-20 11:45:52.020991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:52.021017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.397 [2024-11-20 11:45:52.021030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:46.397 [2024-11-20 11:45:52.021044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.397 [2024-11-20 11:45:52.021055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.397 [2024-11-20 11:45:52.116220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.397 [2024-11-20 11:45:52.116287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:46.397 [2024-11-20 11:45:52.116345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.397 [2024-11-20 11:45:52.116357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.656 [2024-11-20 11:45:52.193445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.656 [2024-11-20 11:45:52.193500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:46.657 [2024-11-20 11:45:52.193538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.657 [2024-11-20 11:45:52.193608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.193773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.657 [2024-11-20 11:45:52.193794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:46.657 [2024-11-20 11:45:52.193813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.657 [2024-11-20 11:45:52.193825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.193892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.657 [2024-11-20 11:45:52.193910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:46.657 [2024-11-20 11:45:52.193925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.657 [2024-11-20 11:45:52.193937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.194126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.657 [2024-11-20 11:45:52.194145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:46.657 [2024-11-20 11:45:52.194167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:31:46.657 [2024-11-20 11:45:52.194178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.194233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.657 [2024-11-20 11:45:52.194258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:46.657 [2024-11-20 11:45:52.194274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.657 [2024-11-20 11:45:52.194285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.194352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.657 [2024-11-20 11:45:52.194367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:46.657 [2024-11-20 11:45:52.194381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.657 [2024-11-20 11:45:52.194396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.194456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:46.657 [2024-11-20 11:45:52.194485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:46.657 [2024-11-20 11:45:52.194501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:46.657 [2024-11-20 11:45:52.194512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.657 [2024-11-20 11:45:52.194707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.199 ms, result 0 00:31:46.657 true 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77962 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77962 ']' 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77962 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77962 00:31:46.657 killing process with pid 77962 00:31:46.657 Received shutdown signal, test time was about 4.000000 seconds 00:31:46.657 00:31:46.657 Latency(us) 00:31:46.657 [2024-11-20T11:45:52.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.657 [2024-11-20T11:45:52.423Z] =================================================================================================================== 00:31:46.657 [2024-11-20T11:45:52.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77962' 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77962 00:31:46.657 11:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77962 00:31:50.846 Remove shared memory files 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:50.846 11:45:55 
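
The autotest_common.sh xtrace above (lines 954-978 of that script, per the @-markers) is the shared killprocess helper shutting down the bdevperf app. A condensed sketch of the flow those lines imply, not the verbatim source:

    # killprocess, reconstructed from the xtrace above: confirm the pid
    # is set and alive, refuse to kill a sudo wrapper, then kill and reap.
    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1            # '[' -z 77962 ']' in the trace
      kill -0 "$pid" || return 1           # kill -0 77962
      local process_name=
      if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 here
      fi
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
    }
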
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:31:50.846 ************************************ 00:31:50.846 END TEST ftl_bdevperf 00:31:50.846 ************************************ 00:31:50.846 00:31:50.846 real 0m25.876s 00:31:50.846 user 0m29.568s 00:31:50.846 sys 0m1.290s 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.846 11:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:50.846 11:45:55 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:31:50.846 11:45:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:50.846 11:45:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.846 11:45:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:50.846 ************************************ 00:31:50.846 START TEST ftl_trim 00:31:50.846 ************************************ 00:31:50.846 11:45:55 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:31:50.846 * Looking for test storage... 00:31:50.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:50.846 11:45:55 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.846 11:45:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.846 11:45:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.846 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.846 11:45:56 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:31:50.846 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.846 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.846 --rc genhtml_branch_coverage=1 00:31:50.846 --rc genhtml_function_coverage=1 00:31:50.846 --rc genhtml_legend=1 00:31:50.846 --rc geninfo_all_blocks=1 00:31:50.846 --rc geninfo_unexecuted_blocks=1 00:31:50.846 00:31:50.846 ' 00:31:50.846 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.846 --rc genhtml_branch_coverage=1 00:31:50.846 --rc genhtml_function_coverage=1 00:31:50.846 --rc genhtml_legend=1 00:31:50.846 --rc geninfo_all_blocks=1 00:31:50.846 --rc geninfo_unexecuted_blocks=1 00:31:50.846 00:31:50.846 ' 00:31:50.846 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.846 --rc genhtml_branch_coverage=1 00:31:50.846 --rc genhtml_function_coverage=1 00:31:50.846 --rc genhtml_legend=1 00:31:50.846 --rc geninfo_all_blocks=1 00:31:50.846 --rc geninfo_unexecuted_blocks=1 00:31:50.846 00:31:50.846 ' 00:31:50.846 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.846 --rc genhtml_branch_coverage=1 00:31:50.846 --rc genhtml_function_coverage=1 00:31:50.846 --rc genhtml_legend=1 00:31:50.846 --rc geninfo_all_blocks=1 00:31:50.846 --rc geninfo_unexecuted_blocks=1 00:31:50.846 00:31:50.846 ' 00:31:50.846 11:45:56 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:50.846 11:45:56 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:31:50.846 11:45:56 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:50.846 11:45:56 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:50.846 11:45:56 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
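
A reader-friendly restatement of the lcov gate traced above: scripts/common.sh splits both version strings on '.', '-' and ':' and compares the numeric components left to right. A self-contained sketch under our own name version_lt, not the verbatim cmp_versions source:

    # Component-wise "less than" over dotted version strings, mirroring
    # the IFS=.-: / read -ra walk shown in the xtrace above.
    version_lt() {
      local IFS=.-:
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        local c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # missing components count as 0
        ((c1 < c2)) && return 0
        ((c1 > c2)) && return 1
      done
      return 1    # equal is not "less than"
    }
    version_lt 1.15 2 && echo '1.15 < 2'    # the lcov check above takes this branch
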
00:31:50.846 11:45:56 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:50.847 11:45:56 ftl.ftl_trim -- 
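
Before the target comes up below, trim.sh has pinned its parameters; collected from the xtrace above for quick reference (all values are this run's):

    device=0000:00:11.0              # base NVMe bdf
    cache_device=0000:00:10.0        # NV cache NVMe bdf
    timeout=240                      # seconds, later passed as rpc.py -t 240
    data_size_in_blocks=65536
    unmap_size_in_blocks=1024
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
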
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78313 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:31:50.847 11:45:56 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78313 00:31:50.847 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78313 ']' 00:31:50.847 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.847 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.847 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.847 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.847 11:45:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:31:50.847 [2024-11-20 11:45:56.300362] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:31:50.847 [2024-11-20 11:45:56.300568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78313 ] 00:31:50.847 [2024-11-20 11:45:56.485419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:50.847 [2024-11-20 11:45:56.609355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.847 [2024-11-20 11:45:56.609441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.105 [2024-11-20 11:45:56.609460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.671 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.671 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:31:51.671 11:45:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:51.671 11:45:57 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:31:51.671 11:45:57 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:51.671 11:45:57 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:31:51.671 11:45:57 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:31:51.929 11:45:57 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:52.187 11:45:57 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:52.187 11:45:57 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:31:52.187 11:45:57 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:52.187 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:52.187 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:52.187 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:52.187 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:52.187 11:45:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:52.446 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.446 { 00:31:52.446 "name": "nvme0n1", 00:31:52.446 "aliases": [ 
00:31:52.446 "a500bb3e-2db8-4e4b-802c-981abe295dfc" 00:31:52.446 ], 00:31:52.446 "product_name": "NVMe disk", 00:31:52.446 "block_size": 4096, 00:31:52.446 "num_blocks": 1310720, 00:31:52.446 "uuid": "a500bb3e-2db8-4e4b-802c-981abe295dfc", 00:31:52.446 "numa_id": -1, 00:31:52.446 "assigned_rate_limits": { 00:31:52.446 "rw_ios_per_sec": 0, 00:31:52.446 "rw_mbytes_per_sec": 0, 00:31:52.446 "r_mbytes_per_sec": 0, 00:31:52.446 "w_mbytes_per_sec": 0 00:31:52.446 }, 00:31:52.446 "claimed": true, 00:31:52.446 "claim_type": "read_many_write_one", 00:31:52.446 "zoned": false, 00:31:52.446 "supported_io_types": { 00:31:52.446 "read": true, 00:31:52.446 "write": true, 00:31:52.446 "unmap": true, 00:31:52.446 "flush": true, 00:31:52.446 "reset": true, 00:31:52.446 "nvme_admin": true, 00:31:52.446 "nvme_io": true, 00:31:52.446 "nvme_io_md": false, 00:31:52.446 "write_zeroes": true, 00:31:52.446 "zcopy": false, 00:31:52.446 "get_zone_info": false, 00:31:52.446 "zone_management": false, 00:31:52.446 "zone_append": false, 00:31:52.446 "compare": true, 00:31:52.446 "compare_and_write": false, 00:31:52.446 "abort": true, 00:31:52.446 "seek_hole": false, 00:31:52.446 "seek_data": false, 00:31:52.446 "copy": true, 00:31:52.446 "nvme_iov_md": false 00:31:52.446 }, 00:31:52.446 "driver_specific": { 00:31:52.446 "nvme": [ 00:31:52.446 { 00:31:52.446 "pci_address": "0000:00:11.0", 00:31:52.447 "trid": { 00:31:52.447 "trtype": "PCIe", 00:31:52.447 "traddr": "0000:00:11.0" 00:31:52.447 }, 00:31:52.447 "ctrlr_data": { 00:31:52.447 "cntlid": 0, 00:31:52.447 "vendor_id": "0x1b36", 00:31:52.447 "model_number": "QEMU NVMe Ctrl", 00:31:52.447 "serial_number": "12341", 00:31:52.447 "firmware_revision": "8.0.0", 00:31:52.447 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:52.447 "oacs": { 00:31:52.447 "security": 0, 00:31:52.447 "format": 1, 00:31:52.447 "firmware": 0, 00:31:52.447 "ns_manage": 1 00:31:52.447 }, 00:31:52.447 "multi_ctrlr": false, 00:31:52.447 "ana_reporting": false 00:31:52.447 }, 00:31:52.447 "vs": { 00:31:52.447 "nvme_version": "1.4" 00:31:52.447 }, 00:31:52.447 "ns_data": { 00:31:52.447 "id": 1, 00:31:52.447 "can_share": false 00:31:52.447 } 00:31:52.447 } 00:31:52.447 ], 00:31:52.447 "mp_policy": "active_passive" 00:31:52.447 } 00:31:52.447 } 00:31:52.447 ]' 00:31:52.447 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.447 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.447 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:52.705 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:52.705 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:52.706 11:45:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:31:52.706 11:45:58 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:31:52.706 11:45:58 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:52.706 11:45:58 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:31:52.706 11:45:58 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:52.706 11:45:58 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:52.964 11:45:58 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c60f5b4a-20ef-4a4c-ae8f-a293273190f8 00:31:52.964 11:45:58 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:31:52.964 11:45:58 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c60f5b4a-20ef-4a4c-ae8f-a293273190f8 00:31:53.223 11:45:58 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:53.481 11:45:59 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ab1c356b-66e4-421a-9a40-1c365cb70cdb 00:31:53.481 11:45:59 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ab1c356b-66e4-421a-9a40-1c365cb70cdb 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c95b1762-6cc6-4b3b-9175-de182273afca 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c95b1762-6cc6-4b3b-9175-de182273afca 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c95b1762-6cc6-4b3b-9175-de182273afca 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:31:53.740 11:45:59 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c95b1762-6cc6-4b3b-9175-de182273afca 00:31:53.740 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c95b1762-6cc6-4b3b-9175-de182273afca 00:31:53.740 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:53.740 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:53.740 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:53.740 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c95b1762-6cc6-4b3b-9175-de182273afca 00:31:53.998 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:53.998 { 00:31:53.998 "name": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:53.998 "aliases": [ 00:31:53.998 "lvs/nvme0n1p0" 00:31:53.998 ], 00:31:53.998 "product_name": "Logical Volume", 00:31:53.998 "block_size": 4096, 00:31:53.998 "num_blocks": 26476544, 00:31:53.998 "uuid": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:53.998 "assigned_rate_limits": { 00:31:53.998 "rw_ios_per_sec": 0, 00:31:53.998 "rw_mbytes_per_sec": 0, 00:31:53.998 "r_mbytes_per_sec": 0, 00:31:53.998 "w_mbytes_per_sec": 0 00:31:53.998 }, 00:31:53.998 "claimed": false, 00:31:53.998 "zoned": false, 00:31:53.998 "supported_io_types": { 00:31:53.998 "read": true, 00:31:53.998 "write": true, 00:31:53.998 "unmap": true, 00:31:53.998 "flush": false, 00:31:53.998 "reset": true, 00:31:53.998 "nvme_admin": false, 00:31:53.998 "nvme_io": false, 00:31:53.998 "nvme_io_md": false, 00:31:53.998 "write_zeroes": true, 00:31:53.998 "zcopy": false, 00:31:53.998 "get_zone_info": false, 00:31:53.998 "zone_management": false, 00:31:53.998 "zone_append": false, 00:31:53.998 "compare": false, 00:31:53.998 "compare_and_write": false, 00:31:53.998 "abort": false, 00:31:53.998 "seek_hole": true, 00:31:53.998 "seek_data": true, 00:31:53.999 "copy": false, 00:31:53.999 "nvme_iov_md": false 00:31:53.999 }, 00:31:53.999 "driver_specific": { 00:31:53.999 "lvol": { 00:31:53.999 "lvol_store_uuid": "ab1c356b-66e4-421a-9a40-1c365cb70cdb", 00:31:53.999 "base_bdev": "nvme0n1", 00:31:53.999 "thin_provision": true, 00:31:53.999 "num_allocated_clusters": 0, 00:31:53.999 "snapshot": false, 00:31:53.999 "clone": false, 00:31:53.999 "esnap_clone": false 00:31:53.999 } 00:31:53.999 } 00:31:53.999 } 00:31:53.999 ]' 00:31:53.999 11:45:59 ftl.ftl_trim -- 
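
The provisioning chain traced above, end to end; the commands are copied from the xtrace, and the UUIDs are specific to this run:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # clear_lvols: drop the stale store found by bdev_lvol_get_lvstores
    $rpc_py bdev_lvol_delete_lvstore -u c60f5b4a-20ef-4a4c-ae8f-a293273190f8
    # fresh store on the base namespace, then a thin-provisioned (-t) 103424 MiB volume
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u ab1c356b-66e4-421a-9a40-1c365cb70cdb
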
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:53.999 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:31:53.999 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:53.999 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:53.999 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:53.999 11:45:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:31:53.999 11:45:59 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:31:53.999 11:45:59 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:31:53.999 11:45:59 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:54.258 11:46:00 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:54.258 11:46:00 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:54.258 11:46:00 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c95b1762-6cc6-4b3b-9175-de182273afca 00:31:54.258 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c95b1762-6cc6-4b3b-9175-de182273afca 00:31:54.258 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:54.258 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:54.258 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:54.258 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c95b1762-6cc6-4b3b-9175-de182273afca 00:31:54.824 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:54.824 { 00:31:54.824 "name": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:54.824 "aliases": [ 00:31:54.824 "lvs/nvme0n1p0" 00:31:54.824 ], 00:31:54.824 "product_name": "Logical Volume", 00:31:54.824 "block_size": 4096, 00:31:54.824 "num_blocks": 26476544, 00:31:54.824 "uuid": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:54.824 "assigned_rate_limits": { 00:31:54.824 "rw_ios_per_sec": 0, 00:31:54.824 "rw_mbytes_per_sec": 0, 00:31:54.824 "r_mbytes_per_sec": 0, 00:31:54.824 "w_mbytes_per_sec": 0 00:31:54.824 }, 00:31:54.824 "claimed": false, 00:31:54.824 "zoned": false, 00:31:54.824 "supported_io_types": { 00:31:54.824 "read": true, 00:31:54.824 "write": true, 00:31:54.824 "unmap": true, 00:31:54.824 "flush": false, 00:31:54.824 "reset": true, 00:31:54.824 "nvme_admin": false, 00:31:54.824 "nvme_io": false, 00:31:54.824 "nvme_io_md": false, 00:31:54.824 "write_zeroes": true, 00:31:54.824 "zcopy": false, 00:31:54.824 "get_zone_info": false, 00:31:54.824 "zone_management": false, 00:31:54.824 "zone_append": false, 00:31:54.824 "compare": false, 00:31:54.824 "compare_and_write": false, 00:31:54.824 "abort": false, 00:31:54.824 "seek_hole": true, 00:31:54.824 "seek_data": true, 00:31:54.824 "copy": false, 00:31:54.824 "nvme_iov_md": false 00:31:54.824 }, 00:31:54.824 "driver_specific": { 00:31:54.824 "lvol": { 00:31:54.824 "lvol_store_uuid": "ab1c356b-66e4-421a-9a40-1c365cb70cdb", 00:31:54.824 "base_bdev": "nvme0n1", 00:31:54.824 "thin_provision": true, 00:31:54.824 "num_allocated_clusters": 0, 00:31:54.824 "snapshot": false, 00:31:54.824 "clone": false, 00:31:54.824 "esnap_clone": false 00:31:54.824 } 00:31:54.824 } 00:31:54.824 } 00:31:54.824 ]' 00:31:54.824 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:54.824 11:46:00 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:31:54.824 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:54.824 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:54.824 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:54.824 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:31:54.824 11:46:00 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:31:54.824 11:46:00 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:55.082 11:46:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:31:55.082 11:46:00 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:31:55.082 11:46:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c95b1762-6cc6-4b3b-9175-de182273afca 00:31:55.082 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c95b1762-6cc6-4b3b-9175-de182273afca 00:31:55.082 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:55.082 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:31:55.082 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:31:55.082 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c95b1762-6cc6-4b3b-9175-de182273afca 00:31:55.341 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:55.341 { 00:31:55.341 "name": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:55.341 "aliases": [ 00:31:55.341 "lvs/nvme0n1p0" 00:31:55.341 ], 00:31:55.341 "product_name": "Logical Volume", 00:31:55.341 "block_size": 4096, 00:31:55.341 "num_blocks": 26476544, 00:31:55.341 "uuid": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:55.341 "assigned_rate_limits": { 00:31:55.341 "rw_ios_per_sec": 0, 00:31:55.341 "rw_mbytes_per_sec": 0, 00:31:55.341 "r_mbytes_per_sec": 0, 00:31:55.341 "w_mbytes_per_sec": 0 00:31:55.341 }, 00:31:55.341 "claimed": false, 00:31:55.341 "zoned": false, 00:31:55.341 "supported_io_types": { 00:31:55.341 "read": true, 00:31:55.341 "write": true, 00:31:55.341 "unmap": true, 00:31:55.341 "flush": false, 00:31:55.341 "reset": true, 00:31:55.341 "nvme_admin": false, 00:31:55.341 "nvme_io": false, 00:31:55.341 "nvme_io_md": false, 00:31:55.341 "write_zeroes": true, 00:31:55.341 "zcopy": false, 00:31:55.341 "get_zone_info": false, 00:31:55.341 "zone_management": false, 00:31:55.341 "zone_append": false, 00:31:55.341 "compare": false, 00:31:55.341 "compare_and_write": false, 00:31:55.341 "abort": false, 00:31:55.341 "seek_hole": true, 00:31:55.341 "seek_data": true, 00:31:55.341 "copy": false, 00:31:55.341 "nvme_iov_md": false 00:31:55.341 }, 00:31:55.341 "driver_specific": { 00:31:55.341 "lvol": { 00:31:55.341 "lvol_store_uuid": "ab1c356b-66e4-421a-9a40-1c365cb70cdb", 00:31:55.341 "base_bdev": "nvme0n1", 00:31:55.341 "thin_provision": true, 00:31:55.341 "num_allocated_clusters": 0, 00:31:55.341 "snapshot": false, 00:31:55.341 "clone": false, 00:31:55.341 "esnap_clone": false 00:31:55.341 } 00:31:55.341 } 00:31:55.341 } 00:31:55.341 ]' 00:31:55.341 11:46:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:55.341 11:46:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:31:55.341 11:46:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:55.341 11:46:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:31:55.341 11:46:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:55.341 11:46:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:31:55.341 11:46:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:31:55.341 11:46:01 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c95b1762-6cc6-4b3b-9175-de182273afca -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:31:55.600 [2024-11-20 11:46:01.332483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.600 [2024-11-20 11:46:01.332588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:55.600 [2024-11-20 11:46:01.332617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:55.600 [2024-11-20 11:46:01.332631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.600 [2024-11-20 11:46:01.336918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.600 [2024-11-20 11:46:01.337136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:55.600 [2024-11-20 11:46:01.337305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:31:55.600 [2024-11-20 11:46:01.337443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.600 [2024-11-20 11:46:01.337780] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:55.600 [2024-11-20 11:46:01.338979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:55.600 [2024-11-20 11:46:01.339201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.601 [2024-11-20 11:46:01.339337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:55.601 [2024-11-20 11:46:01.339376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:31:55.601 [2024-11-20 11:46:01.339391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.601 [2024-11-20 11:46:01.339860] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:31:55.601 [2024-11-20 11:46:01.342410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.601 [2024-11-20 11:46:01.342454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:55.601 [2024-11-20 11:46:01.342474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:55.601 [2024-11-20 11:46:01.342491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.601 [2024-11-20 11:46:01.356887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.601 [2024-11-20 11:46:01.357278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:55.601 [2024-11-20 11:46:01.357316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.247 ms 00:31:55.601 [2024-11-20 11:46:01.357338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.601 [2024-11-20 11:46:01.357624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.601 [2024-11-20 11:46:01.357671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:55.601 [2024-11-20 11:46:01.357687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
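
With the base volume and cache controller in place, the device whose startup is traced below is created by the two rpc.py calls shown in the xtrace above (copied verbatim; the -d UUID is this run's lvol):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # ftl/common.sh@50: carve a 5171 MiB NV cache slice -> nvc0n1p0
    $rpc_py bdev_split_create nvc0n1 -s 5171 1
    # ftl/trim.sh@49: bring up ftl0 on the lvol with nvc0n1p0 as NV cache
    $rpc_py -t 240 bdev_ftl_create -b ftl0 \
        -d c95b1762-6cc6-4b3b-9175-de182273afca -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
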
[FTL][ftl0] duration: 0.160 ms 00:31:55.601 [2024-11-20 11:46:01.357709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.601 [2024-11-20 11:46:01.357786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.601 [2024-11-20 11:46:01.357806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:55.601 [2024-11-20 11:46:01.357819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:55.601 [2024-11-20 11:46:01.357834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.601 [2024-11-20 11:46:01.357898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:55.601 [2024-11-20 11:46:01.364138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.859 [2024-11-20 11:46:01.364337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:55.859 [2024-11-20 11:46:01.364400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:31:55.859 [2024-11-20 11:46:01.364415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.859 [2024-11-20 11:46:01.364521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.859 [2024-11-20 11:46:01.364562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:55.859 [2024-11-20 11:46:01.364583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:55.859 [2024-11-20 11:46:01.364617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.859 [2024-11-20 11:46:01.364671] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:55.859 [2024-11-20 11:46:01.364880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:55.859 [2024-11-20 11:46:01.364922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:55.859 [2024-11-20 11:46:01.364945] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:55.859 [2024-11-20 11:46:01.364980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:55.859 [2024-11-20 11:46:01.364996] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:55.859 [2024-11-20 11:46:01.365013] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:55.859 [2024-11-20 11:46:01.365026] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:55.859 [2024-11-20 11:46:01.365041] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:55.859 [2024-11-20 11:46:01.365056] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:55.859 [2024-11-20 11:46:01.365073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.859 [2024-11-20 11:46:01.365089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:55.859 [2024-11-20 11:46:01.365117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:31:55.859 [2024-11-20 11:46:01.365140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.860 [2024-11-20 11:46:01.365295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:55.860 [2024-11-20 11:46:01.365313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:55.860 [2024-11-20 11:46:01.365330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:31:55.860 [2024-11-20 11:46:01.365343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.860 [2024-11-20 11:46:01.365505] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:55.860 [2024-11-20 11:46:01.365523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:55.860 [2024-11-20 11:46:01.365556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:55.860 [2024-11-20 11:46:01.365613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:55.860 [2024-11-20 11:46:01.365666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:55.860 [2024-11-20 11:46:01.365693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:55.860 [2024-11-20 11:46:01.365707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:55.860 [2024-11-20 11:46:01.365732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:55.860 [2024-11-20 11:46:01.365743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:55.860 [2024-11-20 11:46:01.365759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:55.860 [2024-11-20 11:46:01.365771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:55.860 [2024-11-20 11:46:01.365785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:55.860 [2024-11-20 11:46:01.365796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:55.860 [2024-11-20 11:46:01.365825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:55.860 [2024-11-20 11:46:01.365838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:55.860 [2024-11-20 11:46:01.365866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.860 [2024-11-20 11:46:01.365894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:55.860 [2024-11-20 11:46:01.365905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.860 [2024-11-20 11:46:01.365930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:55.860 [2024-11-20 11:46:01.365944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.860 [2024-11-20 11:46:01.365968] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:31:55.860 [2024-11-20 11:46:01.365979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:55.860 [2024-11-20 11:46:01.365993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.860 [2024-11-20 11:46:01.366004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:55.860 [2024-11-20 11:46:01.366021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:55.860 [2024-11-20 11:46:01.366032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:55.860 [2024-11-20 11:46:01.366046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:55.860 [2024-11-20 11:46:01.366057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:55.860 [2024-11-20 11:46:01.366071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:55.860 [2024-11-20 11:46:01.366083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:55.860 [2024-11-20 11:46:01.366097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:55.860 [2024-11-20 11:46:01.366108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.366122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:55.860 [2024-11-20 11:46:01.366133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:55.860 [2024-11-20 11:46:01.366147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.366158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:55.860 [2024-11-20 11:46:01.366174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:55.860 [2024-11-20 11:46:01.366202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:55.860 [2024-11-20 11:46:01.366218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.860 [2024-11-20 11:46:01.366232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:55.860 [2024-11-20 11:46:01.366252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:55.860 [2024-11-20 11:46:01.366263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:55.860 [2024-11-20 11:46:01.366295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:55.860 [2024-11-20 11:46:01.366307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:55.860 [2024-11-20 11:46:01.366333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:55.860 [2024-11-20 11:46:01.366351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:55.860 [2024-11-20 11:46:01.366372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:55.860 [2024-11-20 11:46:01.366402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:55.860 [2024-11-20 11:46:01.366414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:31:55.860 [2024-11-20 11:46:01.366430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:55.860 [2024-11-20 11:46:01.366442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:55.860 [2024-11-20 11:46:01.366457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:55.860 [2024-11-20 11:46:01.366470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:55.860 [2024-11-20 11:46:01.366485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:55.860 [2024-11-20 11:46:01.366497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:55.860 [2024-11-20 11:46:01.366534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:55.860 [2024-11-20 11:46:01.366638] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:55.860 [2024-11-20 11:46:01.366687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:55.860 [2024-11-20 11:46:01.366715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:55.860 [2024-11-20 11:46:01.366726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:55.860 [2024-11-20 11:46:01.366742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:55.860 [2024-11-20 11:46:01.366756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.860 [2024-11-20 11:46:01.366770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:55.860 [2024-11-20 11:46:01.366783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:31:55.860 [2024-11-20 11:46:01.366798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.860 [2024-11-20 11:46:01.366952] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:31:55.860 [2024-11-20 11:46:01.366976] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:59.148 [2024-11-20 11:46:04.331608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.331953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:59.148 [2024-11-20 11:46:04.332092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2964.665 ms 00:31:59.148 [2024-11-20 11:46:04.332166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.371072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.371349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.148 [2024-11-20 11:46:04.371483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.420 ms 00:31:59.148 [2024-11-20 11:46:04.371672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.371944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.372097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:59.148 [2024-11-20 11:46:04.372240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:59.148 [2024-11-20 11:46:04.372305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.424041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.424297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.148 [2024-11-20 11:46:04.424463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.536 ms 00:31:59.148 [2024-11-20 11:46:04.424526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.424814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.424981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.148 [2024-11-20 11:46:04.425114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:59.148 [2024-11-20 11:46:04.425173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.426014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.426175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.148 [2024-11-20 11:46:04.426293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:31:59.148 [2024-11-20 11:46:04.426347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.426653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.426724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.148 [2024-11-20 11:46:04.426884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:31:59.148 [2024-11-20 11:46:04.427061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.447777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.447989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:31:59.148 [2024-11-20 11:46:04.448105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.540 ms 00:31:59.148 [2024-11-20 11:46:04.448161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.461824] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:59.148 [2024-11-20 11:46:04.482764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.483052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:59.148 [2024-11-20 11:46:04.483176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.414 ms 00:31:59.148 [2024-11-20 11:46:04.483302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.563225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.563574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:59.148 [2024-11-20 11:46:04.563720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.709 ms 00:31:59.148 [2024-11-20 11:46:04.563844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.564254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.564410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:59.148 [2024-11-20 11:46:04.564586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:59.148 [2024-11-20 11:46:04.564651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.592570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.592756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:59.148 [2024-11-20 11:46:04.592806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.795 ms 00:31:59.148 [2024-11-20 11:46:04.592821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.620230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.620273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:59.148 [2024-11-20 11:46:04.620310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.280 ms 00:31:59.148 [2024-11-20 11:46:04.620322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.621286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.621500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:59.148 [2024-11-20 11:46:04.621587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:31:59.148 [2024-11-20 11:46:04.621623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.704668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.148 [2024-11-20 11:46:04.704723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:59.148 [2024-11-20 11:46:04.704770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.946 ms 00:31:59.148 [2024-11-20 11:46:04.704783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:59.148 [2024-11-20 11:46:04.734486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.149 [2024-11-20 11:46:04.734529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:59.149 [2024-11-20 11:46:04.734600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.507 ms 00:31:59.149 [2024-11-20 11:46:04.734613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.149 [2024-11-20 11:46:04.762377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.149 [2024-11-20 11:46:04.762419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:59.149 [2024-11-20 11:46:04.762455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.592 ms 00:31:59.149 [2024-11-20 11:46:04.762466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.149 [2024-11-20 11:46:04.790945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.149 [2024-11-20 11:46:04.790988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:59.149 [2024-11-20 11:46:04.791024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.331 ms 00:31:59.149 [2024-11-20 11:46:04.791056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.149 [2024-11-20 11:46:04.791169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.149 [2024-11-20 11:46:04.791191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:59.149 [2024-11-20 11:46:04.791210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:59.149 [2024-11-20 11:46:04.791221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.149 [2024-11-20 11:46:04.791346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.149 [2024-11-20 11:46:04.791362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:59.149 [2024-11-20 11:46:04.791377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:59.149 [2024-11-20 11:46:04.791388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.149 [2024-11-20 11:46:04.792929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:59.149 [2024-11-20 11:46:04.796695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3459.954 ms, result 0 00:31:59.149 [2024-11-20 11:46:04.797788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:59.149 { 00:31:59.149 "name": "ftl0", 00:31:59.149 "uuid": "e3116d21-5f36-46d4-8ab1-bab032ddcd4c" 00:31:59.149 } 00:31:59.149 11:46:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:31:59.149 11:46:04 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:31:59.149 11:46:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:59.149 11:46:04 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:31:59.149 11:46:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:59.149 11:46:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:59.149 11:46:04 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:59.407 11:46:05 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:31:59.666 [ 00:31:59.666 { 00:31:59.666 "name": "ftl0", 00:31:59.666 "aliases": [ 00:31:59.666 "e3116d21-5f36-46d4-8ab1-bab032ddcd4c" 00:31:59.666 ], 00:31:59.666 "product_name": "FTL disk", 00:31:59.666 "block_size": 4096, 00:31:59.666 "num_blocks": 23592960, 00:31:59.666 "uuid": "e3116d21-5f36-46d4-8ab1-bab032ddcd4c", 00:31:59.666 "assigned_rate_limits": { 00:31:59.666 "rw_ios_per_sec": 0, 00:31:59.666 "rw_mbytes_per_sec": 0, 00:31:59.666 "r_mbytes_per_sec": 0, 00:31:59.666 "w_mbytes_per_sec": 0 00:31:59.666 }, 00:31:59.666 "claimed": false, 00:31:59.666 "zoned": false, 00:31:59.666 "supported_io_types": { 00:31:59.666 "read": true, 00:31:59.666 "write": true, 00:31:59.666 "unmap": true, 00:31:59.666 "flush": true, 00:31:59.666 "reset": false, 00:31:59.666 "nvme_admin": false, 00:31:59.666 "nvme_io": false, 00:31:59.666 "nvme_io_md": false, 00:31:59.666 "write_zeroes": true, 00:31:59.666 "zcopy": false, 00:31:59.666 "get_zone_info": false, 00:31:59.666 "zone_management": false, 00:31:59.666 "zone_append": false, 00:31:59.666 "compare": false, 00:31:59.666 "compare_and_write": false, 00:31:59.666 "abort": false, 00:31:59.666 "seek_hole": false, 00:31:59.666 "seek_data": false, 00:31:59.666 "copy": false, 00:31:59.666 "nvme_iov_md": false 00:31:59.666 }, 00:31:59.666 "driver_specific": { 00:31:59.666 "ftl": { 00:31:59.666 "base_bdev": "c95b1762-6cc6-4b3b-9175-de182273afca", 00:31:59.666 "cache": "nvc0n1p0" 00:31:59.666 } 00:31:59.666 } 00:31:59.666 } 00:31:59.666 ] 00:31:59.924 11:46:05 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:31:59.924 11:46:05 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:31:59.924 11:46:05 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:00.183 11:46:05 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:32:00.183 11:46:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:32:00.441 11:46:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:32:00.441 { 00:32:00.441 "name": "ftl0", 00:32:00.441 "aliases": [ 00:32:00.441 "e3116d21-5f36-46d4-8ab1-bab032ddcd4c" 00:32:00.441 ], 00:32:00.441 "product_name": "FTL disk", 00:32:00.441 "block_size": 4096, 00:32:00.441 "num_blocks": 23592960, 00:32:00.441 "uuid": "e3116d21-5f36-46d4-8ab1-bab032ddcd4c", 00:32:00.441 "assigned_rate_limits": { 00:32:00.441 "rw_ios_per_sec": 0, 00:32:00.441 "rw_mbytes_per_sec": 0, 00:32:00.441 "r_mbytes_per_sec": 0, 00:32:00.441 "w_mbytes_per_sec": 0 00:32:00.441 }, 00:32:00.441 "claimed": false, 00:32:00.441 "zoned": false, 00:32:00.441 "supported_io_types": { 00:32:00.441 "read": true, 00:32:00.441 "write": true, 00:32:00.441 "unmap": true, 00:32:00.441 "flush": true, 00:32:00.441 "reset": false, 00:32:00.441 "nvme_admin": false, 00:32:00.441 "nvme_io": false, 00:32:00.441 "nvme_io_md": false, 00:32:00.441 "write_zeroes": true, 00:32:00.441 "zcopy": false, 00:32:00.441 "get_zone_info": false, 00:32:00.441 "zone_management": false, 00:32:00.441 "zone_append": false, 00:32:00.441 "compare": false, 00:32:00.441 "compare_and_write": false, 00:32:00.441 "abort": false, 00:32:00.441 "seek_hole": false, 00:32:00.441 "seek_data": false, 00:32:00.441 "copy": false, 00:32:00.441 "nvme_iov_md": false 00:32:00.441 }, 00:32:00.441 "driver_specific": { 00:32:00.441 "ftl": { 00:32:00.441 "base_bdev": 
"c95b1762-6cc6-4b3b-9175-de182273afca", 00:32:00.441 "cache": "nvc0n1p0" 00:32:00.441 } 00:32:00.441 } 00:32:00.441 } 00:32:00.441 ]' 00:32:00.441 11:46:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:32:00.441 11:46:06 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:32:00.441 11:46:06 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:00.700 [2024-11-20 11:46:06.287716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.287793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:00.700 [2024-11-20 11:46:06.287819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:00.700 [2024-11-20 11:46:06.287839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.287914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:00.700 [2024-11-20 11:46:06.291590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.291624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:00.700 [2024-11-20 11:46:06.291646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:32:00.700 [2024-11-20 11:46:06.291659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.292602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.292639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:00.700 [2024-11-20 11:46:06.292659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:32:00.700 [2024-11-20 11:46:06.292672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.296343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.296378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:00.700 [2024-11-20 11:46:06.296413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.581 ms 00:32:00.700 [2024-11-20 11:46:06.296425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.303425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.303459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:00.700 [2024-11-20 11:46:06.303494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:32:00.700 [2024-11-20 11:46:06.303506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.333283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.333328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:00.700 [2024-11-20 11:46:06.333376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.575 ms 00:32:00.700 [2024-11-20 11:46:06.333390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.352139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.352184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:00.700 [2024-11-20 11:46:06.352224] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.640 ms 00:32:00.700 [2024-11-20 11:46:06.352240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.352591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.352615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:00.700 [2024-11-20 11:46:06.352632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:32:00.700 [2024-11-20 11:46:06.352655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.381720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.381763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:00.700 [2024-11-20 11:46:06.381801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.990 ms 00:32:00.700 [2024-11-20 11:46:06.381813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.410821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.410864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:00.700 [2024-11-20 11:46:06.410904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.884 ms 00:32:00.700 [2024-11-20 11:46:06.410916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.700 [2024-11-20 11:46:06.438828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.700 [2024-11-20 11:46:06.439066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:00.700 [2024-11-20 11:46:06.439102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.795 ms 00:32:00.700 [2024-11-20 11:46:06.439115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.467292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.058 [2024-11-20 11:46:06.467336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:01.058 [2024-11-20 11:46:06.467373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.920 ms 00:32:01.058 [2024-11-20 11:46:06.467384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.467518] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:01.058 [2024-11-20 11:46:06.467582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 
[2024-11-20 11:46:06.467712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.467978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:32:01.058 [2024-11-20 11:46:06.468097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.468999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.469015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:01.058 [2024-11-20 11:46:06.469035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:01.058 [2024-11-20 11:46:06.469053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:32:01.058 [2024-11-20 11:46:06.469066] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:01.058 [2024-11-20 11:46:06.469079] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:01.058 [2024-11-20 11:46:06.469090] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:01.058 [2024-11-20 11:46:06.469104] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:01.058 [2024-11-20 11:46:06.469118] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:01.058 [2024-11-20 11:46:06.469131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:32:01.058 [2024-11-20 11:46:06.469142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:01.058 [2024-11-20 11:46:06.469155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:01.058 [2024-11-20 11:46:06.469165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:01.058 [2024-11-20 11:46:06.469180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.058 [2024-11-20 11:46:06.469191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:01.058 [2024-11-20 11:46:06.469206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:32:01.058 [2024-11-20 11:46:06.469246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.485201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.058 [2024-11-20 11:46:06.485427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:01.058 [2024-11-20 11:46:06.485489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.895 ms 00:32:01.058 [2024-11-20 11:46:06.485503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.486142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.058 [2024-11-20 11:46:06.486170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:01.058 [2024-11-20 11:46:06.486205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:32:01.058 [2024-11-20 11:46:06.486217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.545118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.058 [2024-11-20 11:46:06.545177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.058 [2024-11-20 11:46:06.545225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.058 [2024-11-20 11:46:06.545256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.545437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.058 [2024-11-20 11:46:06.545458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.058 [2024-11-20 11:46:06.545475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.058 [2024-11-20 11:46:06.545487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.545618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.058 [2024-11-20 11:46:06.545640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.058 [2024-11-20 11:46:06.545694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.058 [2024-11-20 11:46:06.545723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 11:46:06.545780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.058 [2024-11-20 11:46:06.545795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.058 [2024-11-20 11:46:06.545810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.058 [2024-11-20 11:46:06.545822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.058 [2024-11-20 
11:46:06.653065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.058 [2024-11-20 11:46:06.653140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:01.058 [2024-11-20 11:46:06.653181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.653194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.734845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.734908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:01.059 [2024-11-20 11:46:06.734947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.734960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.735109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.735129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:01.059 [2024-11-20 11:46:06.735171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.735187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.735290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.735303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:01.059 [2024-11-20 11:46:06.735317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.735328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.735494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.735514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:01.059 [2024-11-20 11:46:06.735529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.735585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.735688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.735707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:01.059 [2024-11-20 11:46:06.735723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.735734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.735810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.735825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:01.059 [2024-11-20 11:46:06.735842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.735853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.735995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.059 [2024-11-20 11:46:06.736014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:01.059 [2024-11-20 11:46:06.736030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.059 [2024-11-20 11:46:06.736042] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.059 [2024-11-20 11:46:06.736352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.631 ms, result 0 00:32:01.059 true 00:32:01.059 11:46:06 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78313 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78313 ']' 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78313 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78313 00:32:01.059 killing process with pid 78313 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78313' 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78313 00:32:01.059 11:46:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78313 00:32:06.383 11:46:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:32:06.951 65536+0 records in 00:32:06.951 65536+0 records out 00:32:06.951 268435456 bytes (268 MB, 256 MiB) copied, 1.13178 s, 237 MB/s 00:32:06.951 11:46:12 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:06.951 [2024-11-20 11:46:12.684431] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
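The dd/spdk_dd pair above is the data phase of the trim test: 65536 records of 4 KiB are exactly 268435456 bytes (256 MiB) of random pattern, which spdk_dd then writes into the ftl0 bdev using a saved bdev subsystem config (presumably the one assembled by the save_subsystem_config calls earlier in the trace). A stand-alone replay would look roughly like this; this is a sketch, not the test script itself: the redirection of dd's output into the random_pattern file is our assumption, implied by the --if argument, while the paths and flags are taken verbatim from the trace.

  cd /home/vagrant/spdk_repo/spdk
  # 65536 * 4096 B = 268435456 B = 256 MiB of random data, as reported by dd above
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  # replay the pattern into the FTL bdev described by the saved JSON config
  build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
      --json=test/ftl/config/ftl.json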
00:32:06.951 [2024-11-20 11:46:12.684637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78523 ] 00:32:07.210 [2024-11-20 11:46:12.875760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.469 [2024-11-20 11:46:13.018426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.728 [2024-11-20 11:46:13.353823] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.728 [2024-11-20 11:46:13.353914] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:07.988 [2024-11-20 11:46:13.519672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.519724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:07.988 [2024-11-20 11:46:13.519761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:07.988 [2024-11-20 11:46:13.519773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.523094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.523139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:07.988 [2024-11-20 11:46:13.523172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:32:07.988 [2024-11-20 11:46:13.523183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.523316] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:07.988 [2024-11-20 11:46:13.524335] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:07.988 [2024-11-20 11:46:13.524410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.524441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:07.988 [2024-11-20 11:46:13.524453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:32:07.988 [2024-11-20 11:46:13.524464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.526635] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:07.988 [2024-11-20 11:46:13.541997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.542045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:07.988 [2024-11-20 11:46:13.542079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.363 ms 00:32:07.988 [2024-11-20 11:46:13.542091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.542201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.542222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:07.988 [2024-11-20 11:46:13.542235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:07.988 [2024-11-20 11:46:13.542246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.551334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
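This second startup is spdk_dd bringing ftl0 back up from the superblock persisted during the shutdown above: note the "layout blob load" lines here versus "blob store" during the first startup, and "FTL layout setup mode 0" versus "mode 1", so the on-disk layout is being read back rather than created. The trace_step name/duration pairs also make it easy to see where startup time goes; one throwaway way to tabulate them from a saved copy of this console log (a sketch assuming GNU grep for -oP, one log entry per line in the raw log, and job.log as a hypothetical file name):

  paste \
      <(grep -oP 'trace_step: .*name: \K.*' job.log) \
      <(grep -oP 'trace_step: .*duration: \K[0-9.]+ ms' job.log)

Each management step emits exactly one "name:" and one "duration:" entry in order, so the pairwise paste lines up, printing e.g. "Load super block<TAB>15.363 ms".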
00:32:07.988 [2024-11-20 11:46:13.551571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:07.988 [2024-11-20 11:46:13.551599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.036 ms 00:32:07.988 [2024-11-20 11:46:13.551613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.551740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.551761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:07.988 [2024-11-20 11:46:13.551773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:07.988 [2024-11-20 11:46:13.551785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.551824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.551852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:07.988 [2024-11-20 11:46:13.551864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:07.988 [2024-11-20 11:46:13.551874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.551904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:07.988 [2024-11-20 11:46:13.556576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.556624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:07.988 [2024-11-20 11:46:13.556657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.679 ms 00:32:07.988 [2024-11-20 11:46:13.556668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.556782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.556800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:07.988 [2024-11-20 11:46:13.556812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:07.988 [2024-11-20 11:46:13.556823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.556853] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:07.988 [2024-11-20 11:46:13.556918] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:07.988 [2024-11-20 11:46:13.556959] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:07.988 [2024-11-20 11:46:13.556980] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:07.988 [2024-11-20 11:46:13.557084] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:07.988 [2024-11-20 11:46:13.557111] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:07.988 [2024-11-20 11:46:13.557127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:07.988 [2024-11-20 11:46:13.557142] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:07.988 [2024-11-20 11:46:13.557162] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:07.988 [2024-11-20 11:46:13.557174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:07.988 [2024-11-20 11:46:13.557185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:07.988 [2024-11-20 11:46:13.557195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:07.988 [2024-11-20 11:46:13.557207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:07.988 [2024-11-20 11:46:13.557268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.557281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:07.988 [2024-11-20 11:46:13.557293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:32:07.988 [2024-11-20 11:46:13.557305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.557421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.988 [2024-11-20 11:46:13.557438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:07.988 [2024-11-20 11:46:13.557457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:07.988 [2024-11-20 11:46:13.557468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.988 [2024-11-20 11:46:13.557621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:07.988 [2024-11-20 11:46:13.557647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:07.988 [2024-11-20 11:46:13.557661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.988 [2024-11-20 11:46:13.557673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.988 [2024-11-20 11:46:13.557685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:07.988 [2024-11-20 11:46:13.557696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:07.988 [2024-11-20 11:46:13.557721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:07.988 [2024-11-20 11:46:13.557731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:07.988 [2024-11-20 11:46:13.557742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:07.988 [2024-11-20 11:46:13.557752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.988 [2024-11-20 11:46:13.557763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:07.988 [2024-11-20 11:46:13.557773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:07.988 [2024-11-20 11:46:13.557783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:07.988 [2024-11-20 11:46:13.557807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:07.988 [2024-11-20 11:46:13.557818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:07.988 [2024-11-20 11:46:13.557829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.989 [2024-11-20 11:46:13.557841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:07.989 [2024-11-20 11:46:13.557852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:07.989 [2024-11-20 11:46:13.557878] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.989 [2024-11-20 11:46:13.557889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:07.989 [2024-11-20 11:46:13.557900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:07.989 [2024-11-20 11:46:13.557910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.989 [2024-11-20 11:46:13.557920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:07.989 [2024-11-20 11:46:13.557930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:07.989 [2024-11-20 11:46:13.557940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.989 [2024-11-20 11:46:13.557950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:07.989 [2024-11-20 11:46:13.557961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:07.989 [2024-11-20 11:46:13.557971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.989 [2024-11-20 11:46:13.557980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:07.989 [2024-11-20 11:46:13.557991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:07.989 [2024-11-20 11:46:13.558001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:07.989 [2024-11-20 11:46:13.558011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:07.989 [2024-11-20 11:46:13.558022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:07.989 [2024-11-20 11:46:13.558032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.989 [2024-11-20 11:46:13.558042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:07.989 [2024-11-20 11:46:13.558053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:07.989 [2024-11-20 11:46:13.558063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:07.989 [2024-11-20 11:46:13.558074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:07.989 [2024-11-20 11:46:13.558085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:07.989 [2024-11-20 11:46:13.558095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.989 [2024-11-20 11:46:13.558106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:07.989 [2024-11-20 11:46:13.558116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:07.989 [2024-11-20 11:46:13.558126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.989 [2024-11-20 11:46:13.558136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:07.989 [2024-11-20 11:46:13.558147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:07.989 [2024-11-20 11:46:13.558158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:07.989 [2024-11-20 11:46:13.558175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:07.989 [2024-11-20 11:46:13.558187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:07.989 [2024-11-20 11:46:13.558199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:07.989 [2024-11-20 11:46:13.558224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:07.989 
[2024-11-20 11:46:13.558235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:07.989 [2024-11-20 11:46:13.558245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:07.989 [2024-11-20 11:46:13.558255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:07.989 [2024-11-20 11:46:13.558268] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:07.989 [2024-11-20 11:46:13.558281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:07.989 [2024-11-20 11:46:13.558305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:07.989 [2024-11-20 11:46:13.558332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:07.989 [2024-11-20 11:46:13.558343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:07.989 [2024-11-20 11:46:13.558354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:07.989 [2024-11-20 11:46:13.558365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:07.989 [2024-11-20 11:46:13.558376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:07.989 [2024-11-20 11:46:13.558387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:07.989 [2024-11-20 11:46:13.558398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:07.989 [2024-11-20 11:46:13.558409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:07.989 [2024-11-20 11:46:13.558464] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:07.989 [2024-11-20 11:46:13.558476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:07.989 [2024-11-20 11:46:13.558500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:07.989 [2024-11-20 11:46:13.558512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:07.989 [2024-11-20 11:46:13.558523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:07.989 [2024-11-20 11:46:13.558535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.558546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:07.989 [2024-11-20 11:46:13.558563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:32:07.989 [2024-11-20 11:46:13.558573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.596323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.596666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:07.989 [2024-11-20 11:46:13.596811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.357 ms 00:32:07.989 [2024-11-20 11:46:13.596952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.597191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.597308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:07.989 [2024-11-20 11:46:13.597450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:07.989 [2024-11-20 11:46:13.597637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.647904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.648126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:07.989 [2024-11-20 11:46:13.648242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.199 ms 00:32:07.989 [2024-11-20 11:46:13.648301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.648599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.648771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:07.989 [2024-11-20 11:46:13.648886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:07.989 [2024-11-20 11:46:13.649023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.649759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.649899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:07.989 [2024-11-20 11:46:13.650006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:32:07.989 [2024-11-20 11:46:13.650154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.650383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.650414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:07.989 [2024-11-20 11:46:13.650430] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:32:07.989 [2024-11-20 11:46:13.650442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.989 [2024-11-20 11:46:13.669054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.989 [2024-11-20 11:46:13.669097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:07.989 [2024-11-20 11:46:13.669132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.579 ms 00:32:07.989 [2024-11-20 11:46:13.669144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.990 [2024-11-20 11:46:13.684417] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:07.990 [2024-11-20 11:46:13.684652] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:07.990 [2024-11-20 11:46:13.684679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.990 [2024-11-20 11:46:13.684691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:07.990 [2024-11-20 11:46:13.684705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.375 ms 00:32:07.990 [2024-11-20 11:46:13.684717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.990 [2024-11-20 11:46:13.710662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.990 [2024-11-20 11:46:13.710706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:07.990 [2024-11-20 11:46:13.710751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.849 ms 00:32:07.990 [2024-11-20 11:46:13.710763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.990 [2024-11-20 11:46:13.724488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.990 [2024-11-20 11:46:13.724531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:07.990 [2024-11-20 11:46:13.724590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.636 ms 00:32:07.990 [2024-11-20 11:46:13.724626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.990 [2024-11-20 11:46:13.738311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.990 [2024-11-20 11:46:13.738352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:07.990 [2024-11-20 11:46:13.738384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.588 ms 00:32:07.990 [2024-11-20 11:46:13.738395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.990 [2024-11-20 11:46:13.739214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.990 [2024-11-20 11:46:13.739247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:07.990 [2024-11-20 11:46:13.739279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:32:07.990 [2024-11-20 11:46:13.739290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.809064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.809140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:08.249 [2024-11-20 11:46:13.809178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.722 ms 00:32:08.249 [2024-11-20 11:46:13.809190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.820163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:08.249 [2024-11-20 11:46:13.840386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.840456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:08.249 [2024-11-20 11:46:13.840479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.028 ms 00:32:08.249 [2024-11-20 11:46:13.840492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.840692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.840722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:08.249 [2024-11-20 11:46:13.840739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:08.249 [2024-11-20 11:46:13.840752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.840833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.840850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:08.249 [2024-11-20 11:46:13.840864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:08.249 [2024-11-20 11:46:13.840877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.840918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.840934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:08.249 [2024-11-20 11:46:13.840950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:08.249 [2024-11-20 11:46:13.840963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.841011] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:08.249 [2024-11-20 11:46:13.841029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.841041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:08.249 [2024-11-20 11:46:13.841053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:08.249 [2024-11-20 11:46:13.841066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.871430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.871482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:08.249 [2024-11-20 11:46:13.871516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.335 ms 00:32:08.249 [2024-11-20 11:46:13.871529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.249 [2024-11-20 11:46:13.871758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.249 [2024-11-20 11:46:13.871781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:08.249 [2024-11-20 11:46:13.871795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:08.249 [2024-11-20 11:46:13.871812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:08.249 [2024-11-20 11:46:13.873166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:08.249 [2024-11-20 11:46:13.877396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.066 ms, result 0 00:32:08.249 [2024-11-20 11:46:13.878443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:08.249 [2024-11-20 11:46:13.893606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:09.185  [2024-11-20T11:46:16.327Z] Copying: 21/256 [MB] (21 MBps) [2024-11-20T11:46:17.263Z] Copying: 44/256 [MB] (22 MBps) [2024-11-20T11:46:18.200Z] Copying: 67/256 [MB] (23 MBps) [2024-11-20T11:46:19.136Z] Copying: 90/256 [MB] (23 MBps) [2024-11-20T11:46:20.070Z] Copying: 113/256 [MB] (22 MBps) [2024-11-20T11:46:21.006Z] Copying: 136/256 [MB] (22 MBps) [2024-11-20T11:46:21.942Z] Copying: 158/256 [MB] (22 MBps) [2024-11-20T11:46:23.318Z] Copying: 181/256 [MB] (22 MBps) [2024-11-20T11:46:24.254Z] Copying: 204/256 [MB] (22 MBps) [2024-11-20T11:46:25.190Z] Copying: 227/256 [MB] (23 MBps) [2024-11-20T11:46:25.190Z] Copying: 251/256 [MB] (23 MBps) [2024-11-20T11:46:25.190Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-20 11:46:25.104360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:19.424 [2024-11-20 11:46:25.116181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 [2024-11-20 11:46:25.116247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:19.424 [2024-11-20 11:46:25.116267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.424 [2024-11-20 11:46:25.116279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.116307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:19.424 [2024-11-20 11:46:25.119675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 [2024-11-20 11:46:25.119727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:19.424 [2024-11-20 11:46:25.119740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:32:19.424 [2024-11-20 11:46:25.119751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.121680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 [2024-11-20 11:46:25.121765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:19.424 [2024-11-20 11:46:25.121780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.900 ms 00:32:19.424 [2024-11-20 11:46:25.121790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.128407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 [2024-11-20 11:46:25.128462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:19.424 [2024-11-20 11:46:25.128485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.594 ms 00:32:19.424 [2024-11-20 11:46:25.128496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.134972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 
[2024-11-20 11:46:25.135020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:19.424 [2024-11-20 11:46:25.135034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.405 ms 00:32:19.424 [2024-11-20 11:46:25.135045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.162524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 [2024-11-20 11:46:25.162586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:19.424 [2024-11-20 11:46:25.162603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.430 ms 00:32:19.424 [2024-11-20 11:46:25.162613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.178933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.424 [2024-11-20 11:46:25.178986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:19.424 [2024-11-20 11:46:25.179010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.242 ms 00:32:19.424 [2024-11-20 11:46:25.179025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.424 [2024-11-20 11:46:25.179170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.425 [2024-11-20 11:46:25.179190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:19.425 [2024-11-20 11:46:25.179202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:32:19.425 [2024-11-20 11:46:25.179213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.683 [2024-11-20 11:46:25.206709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.683 [2024-11-20 11:46:25.206763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:19.683 [2024-11-20 11:46:25.206778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.474 ms 00:32:19.683 [2024-11-20 11:46:25.206788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.683 [2024-11-20 11:46:25.233763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.683 [2024-11-20 11:46:25.233816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:19.683 [2024-11-20 11:46:25.233832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.913 ms 00:32:19.683 [2024-11-20 11:46:25.233842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.683 [2024-11-20 11:46:25.260856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.683 [2024-11-20 11:46:25.260911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:19.683 [2024-11-20 11:46:25.260927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.937 ms 00:32:19.683 [2024-11-20 11:46:25.260938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.683 [2024-11-20 11:46:25.287866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.683 [2024-11-20 11:46:25.287919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:19.683 [2024-11-20 11:46:25.287934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.823 ms 00:32:19.683 [2024-11-20 11:46:25.287944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.683 [2024-11-20 11:46:25.288004] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:19.683 [2024-11-20 11:46:25.288034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:19.683 [2024-11-20 11:46:25.288165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 
11:46:25.288616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:32:19.684 [2024-11-20 11:46:25.288906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.288989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:19.684 [2024-11-20 11:46:25.289253] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:19.684 [2024-11-20 11:46:25.289265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:32:19.685 [2024-11-20 11:46:25.289277] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:19.685 [2024-11-20 11:46:25.289287] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:19.685 [2024-11-20 11:46:25.289297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:19.685 [2024-11-20 11:46:25.289308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:19.685 [2024-11-20 11:46:25.289335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:19.685 [2024-11-20 11:46:25.289346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:19.685 [2024-11-20 11:46:25.289356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:19.685 [2024-11-20 11:46:25.289371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:19.685 [2024-11-20 11:46:25.289381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:19.685 [2024-11-20 11:46:25.289391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.685 [2024-11-20 11:46:25.289402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:19.685 [2024-11-20 11:46:25.289419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:32:19.685 [2024-11-20 11:46:25.289430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.685 [2024-11-20 11:46:25.305088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.685 [2024-11-20 11:46:25.305136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:19.685 [2024-11-20 11:46:25.305151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.632 ms 00:32:19.685 [2024-11-20 11:46:25.305163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.685 [2024-11-20 11:46:25.305706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.685 [2024-11-20 11:46:25.305742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:19.685 [2024-11-20 11:46:25.305756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:32:19.685 [2024-11-20 11:46:25.305767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.685 [2024-11-20 11:46:25.351757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.685 [2024-11-20 11:46:25.351819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.685 [2024-11-20 11:46:25.351836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.685 [2024-11-20 11:46:25.351849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.685 [2024-11-20 11:46:25.351970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.685 [2024-11-20 11:46:25.351991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.685 [2024-11-20 11:46:25.352020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:32:19.685 [2024-11-20 11:46:25.352030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.685 [2024-11-20 11:46:25.352092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.685 [2024-11-20 11:46:25.352109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.685 [2024-11-20 11:46:25.352121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.685 [2024-11-20 11:46:25.352132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.685 [2024-11-20 11:46:25.352157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.685 [2024-11-20 11:46:25.352169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.685 [2024-11-20 11:46:25.352187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.685 [2024-11-20 11:46:25.352198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.449143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.449217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.943 [2024-11-20 11:46:25.449260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.449273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.529735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.529813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.943 [2024-11-20 11:46:25.529838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.529850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.529932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.529950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:19.943 [2024-11-20 11:46:25.529962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.529973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.530007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.530020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:19.943 [2024-11-20 11:46:25.530031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.530049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.530169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.530188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:19.943 [2024-11-20 11:46:25.530200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.530210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.530259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.530276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:19.943 
[2024-11-20 11:46:25.530287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.530298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.530351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.530366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:19.943 [2024-11-20 11:46:25.530378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.530389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.943 [2024-11-20 11:46:25.530441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.943 [2024-11-20 11:46:25.530457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:19.943 [2024-11-20 11:46:25.530468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.943 [2024-11-20 11:46:25.530484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.944 [2024-11-20 11:46:25.530712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 414.515 ms, result 0 00:32:20.878 00:32:20.878 00:32:20.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.878 11:46:26 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78663 00:32:20.878 11:46:26 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:32:20.878 11:46:26 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78663 00:32:20.879 11:46:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78663 ']' 00:32:20.879 11:46:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.879 11:46:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.879 11:46:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.879 11:46:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.879 11:46:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:21.137 [2024-11-20 11:46:26.708229] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:32:21.137 [2024-11-20 11:46:26.708451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78663 ] 00:32:21.137 [2024-11-20 11:46:26.892628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.395 [2024-11-20 11:46:27.010444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.332 11:46:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.332 11:46:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:32:22.332 11:46:27 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:32:22.332 [2024-11-20 11:46:28.083371] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:22.332 [2024-11-20 11:46:28.083480] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:22.592 [2024-11-20 11:46:28.240387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.240460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:22.592 [2024-11-20 11:46:28.240499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:22.592 [2024-11-20 11:46:28.240512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.244035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.244097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:22.592 [2024-11-20 11:46:28.244132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:32:22.592 [2024-11-20 11:46:28.244144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.244300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:22.592 [2024-11-20 11:46:28.245161] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:22.592 [2024-11-20 11:46:28.245219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.245264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:22.592 [2024-11-20 11:46:28.245280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:32:22.592 [2024-11-20 11:46:28.245292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.247507] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:22.592 [2024-11-20 11:46:28.262635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.262758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:22.592 [2024-11-20 11:46:28.262781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.134 ms 00:32:22.592 [2024-11-20 11:46:28.262796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.262915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.262940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:22.592 [2024-11-20 11:46:28.262970] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:22.592 [2024-11-20 11:46:28.262984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.271782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.271867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:22.592 [2024-11-20 11:46:28.271883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.733 ms 00:32:22.592 [2024-11-20 11:46:28.271908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.272091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.272121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:22.592 [2024-11-20 11:46:28.272136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:32:22.592 [2024-11-20 11:46:28.272153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.272209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.272233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:22.592 [2024-11-20 11:46:28.272246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:22.592 [2024-11-20 11:46:28.272260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.272294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:22.592 [2024-11-20 11:46:28.277007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.277062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:22.592 [2024-11-20 11:46:28.277095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:32:22.592 [2024-11-20 11:46:28.277107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.277199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.277218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:22.592 [2024-11-20 11:46:28.277245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:22.592 [2024-11-20 11:46:28.277259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.277292] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:22.592 [2024-11-20 11:46:28.277334] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:22.592 [2024-11-20 11:46:28.277385] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:22.592 [2024-11-20 11:46:28.277408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:22.592 [2024-11-20 11:46:28.277514] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:22.592 [2024-11-20 11:46:28.277529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:22.592 [2024-11-20 11:46:28.277576] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:22.592 [2024-11-20 11:46:28.277617] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:22.592 [2024-11-20 11:46:28.277634] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:22.592 [2024-11-20 11:46:28.277647] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:22.592 [2024-11-20 11:46:28.277661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:22.592 [2024-11-20 11:46:28.277673] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:22.592 [2024-11-20 11:46:28.277689] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:22.592 [2024-11-20 11:46:28.277702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.592 [2024-11-20 11:46:28.277716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:22.592 [2024-11-20 11:46:28.277728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:32:22.592 [2024-11-20 11:46:28.277741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.592 [2024-11-20 11:46:28.277838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.593 [2024-11-20 11:46:28.277857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:22.593 [2024-11-20 11:46:28.277870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:22.593 [2024-11-20 11:46:28.277883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.593 [2024-11-20 11:46:28.278005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:22.593 [2024-11-20 11:46:28.278025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:22.593 [2024-11-20 11:46:28.278037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:22.593 [2024-11-20 11:46:28.278076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:22.593 [2024-11-20 11:46:28.278114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:22.593 [2024-11-20 11:46:28.278149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:22.593 [2024-11-20 11:46:28.278161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:22.593 [2024-11-20 11:46:28.278172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:22.593 [2024-11-20 11:46:28.278185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:22.593 [2024-11-20 11:46:28.278195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:22.593 [2024-11-20 11:46:28.278208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.593 
[2024-11-20 11:46:28.278218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:22.593 [2024-11-20 11:46:28.278233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:22.593 [2024-11-20 11:46:28.278279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:22.593 [2024-11-20 11:46:28.278320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:22.593 [2024-11-20 11:46:28.278372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:22.593 [2024-11-20 11:46:28.278409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:22.593 [2024-11-20 11:46:28.278444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:22.593 [2024-11-20 11:46:28.278469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:22.593 [2024-11-20 11:46:28.278486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:22.593 [2024-11-20 11:46:28.278506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:22.593 [2024-11-20 11:46:28.278520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:22.593 [2024-11-20 11:46:28.278531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:22.593 [2024-11-20 11:46:28.278546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:22.593 [2024-11-20 11:46:28.278570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:22.593 [2024-11-20 11:46:28.278597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278614] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:22.593 [2024-11-20 11:46:28.278626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:22.593 [2024-11-20 11:46:28.278644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.593 [2024-11-20 11:46:28.278678] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:32:22.593 [2024-11-20 11:46:28.278705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:22.593 [2024-11-20 11:46:28.278718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:22.593 [2024-11-20 11:46:28.278730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:22.593 [2024-11-20 11:46:28.278743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:22.593 [2024-11-20 11:46:28.278754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:22.593 [2024-11-20 11:46:28.278768] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:22.593 [2024-11-20 11:46:28.278782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.278799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:22.593 [2024-11-20 11:46:28.278811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:22.593 [2024-11-20 11:46:28.278826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:22.593 [2024-11-20 11:46:28.278837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:22.593 [2024-11-20 11:46:28.278851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:22.593 [2024-11-20 11:46:28.278862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:22.593 [2024-11-20 11:46:28.278875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:22.593 [2024-11-20 11:46:28.278886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:22.593 [2024-11-20 11:46:28.278899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:22.593 [2024-11-20 11:46:28.278910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.278923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.278935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.278948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.278959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:22.593 [2024-11-20 11:46:28.278973] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:22.593 [2024-11-20 
11:46:28.278986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.279002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:22.593 [2024-11-20 11:46:28.279014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:22.593 [2024-11-20 11:46:28.279028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:22.593 [2024-11-20 11:46:28.279039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:22.593 [2024-11-20 11:46:28.279053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.593 [2024-11-20 11:46:28.279065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:22.593 [2024-11-20 11:46:28.279079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:32:22.593 [2024-11-20 11:46:28.279090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.593 [2024-11-20 11:46:28.316707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.593 [2024-11-20 11:46:28.316784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:22.593 [2024-11-20 11:46:28.316822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.516 ms 00:32:22.593 [2024-11-20 11:46:28.316835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.593 [2024-11-20 11:46:28.317012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.593 [2024-11-20 11:46:28.317038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:22.593 [2024-11-20 11:46:28.317068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:22.593 [2024-11-20 11:46:28.317080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.358461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.358559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:22.853 [2024-11-20 11:46:28.358595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.342 ms 00:32:22.853 [2024-11-20 11:46:28.358609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.358754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.358774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:22.853 [2024-11-20 11:46:28.358809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:22.853 [2024-11-20 11:46:28.358822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.359434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.359464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:22.853 [2024-11-20 11:46:28.359493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:32:22.853 [2024-11-20 11:46:28.359507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.359713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.359733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:22.853 [2024-11-20 11:46:28.359752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:32:22.853 [2024-11-20 11:46:28.359765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.380649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.380713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:22.853 [2024-11-20 11:46:28.380753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.839 ms 00:32:22.853 [2024-11-20 11:46:28.380766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.395832] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:22.853 [2024-11-20 11:46:28.395894] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:22.853 [2024-11-20 11:46:28.395932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.395944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:22.853 [2024-11-20 11:46:28.395960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.021 ms 00:32:22.853 [2024-11-20 11:46:28.395970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.421602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.421678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:22.853 [2024-11-20 11:46:28.421715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:32:22.853 [2024-11-20 11:46:28.421728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.435221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.435279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:22.853 [2024-11-20 11:46:28.435316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.396 ms 00:32:22.853 [2024-11-20 11:46:28.435327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.448720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.448778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:22.853 [2024-11-20 11:46:28.448813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.302 ms 00:32:22.853 [2024-11-20 11:46:28.448825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.449654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.449691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:22.853 [2024-11-20 11:46:28.449740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:32:22.853 [2024-11-20 11:46:28.449751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 
11:46:28.528429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.528558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:22.853 [2024-11-20 11:46:28.528600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.638 ms 00:32:22.853 [2024-11-20 11:46:28.528613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.539699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:22.853 [2024-11-20 11:46:28.559337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.559438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:22.853 [2024-11-20 11:46:28.559463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.589 ms 00:32:22.853 [2024-11-20 11:46:28.559478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.559689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.559715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:22.853 [2024-11-20 11:46:28.559729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:22.853 [2024-11-20 11:46:28.559744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.559833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.559853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:22.853 [2024-11-20 11:46:28.559866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:22.853 [2024-11-20 11:46:28.559880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.559917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.559934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:22.853 [2024-11-20 11:46:28.559946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:22.853 [2024-11-20 11:46:28.559970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.560040] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:22.853 [2024-11-20 11:46:28.560062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.560074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:22.853 [2024-11-20 11:46:28.560092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:22.853 [2024-11-20 11:46:28.560103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.853 [2024-11-20 11:46:28.588083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.853 [2024-11-20 11:46:28.588150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:22.853 [2024-11-20 11:46:28.588191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.930 ms 00:32:22.853 [2024-11-20 11:46:28.588205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.854 [2024-11-20 11:46:28.588344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.854 [2024-11-20 11:46:28.588365] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:22.854 [2024-11-20 11:46:28.588401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:22.854 [2024-11-20 11:46:28.588420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.854 [2024-11-20 11:46:28.589873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:22.854 [2024-11-20 11:46:28.593711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.973 ms, result 0 00:32:22.854 [2024-11-20 11:46:28.595327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:22.854 Some configs were skipped because the RPC state that can call them passed over. 00:32:23.112 11:46:28 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:32:23.112 [2024-11-20 11:46:28.859889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.112 [2024-11-20 11:46:28.859989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:23.112 [2024-11-20 11:46:28.860042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:32:23.112 [2024-11-20 11:46:28.860057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.112 [2024-11-20 11:46:28.860134] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.939 ms, result 0 00:32:23.112 true 00:32:23.371 11:46:28 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:32:23.371 [2024-11-20 11:46:29.099847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.371 [2024-11-20 11:46:29.099909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:23.371 [2024-11-20 11:46:29.099933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:32:23.371 [2024-11-20 11:46:29.099947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.371 [2024-11-20 11:46:29.100001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.476 ms, result 0 00:32:23.371 true 00:32:23.371 11:46:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78663 00:32:23.371 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78663 ']' 00:32:23.371 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78663 00:32:23.371 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:32:23.371 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.371 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78663 00:32:23.631 killing process with pid 78663 00:32:23.631 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:23.631 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:23.631 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78663' 00:32:23.631 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78663 00:32:23.631 11:46:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78663 00:32:24.573 [2024-11-20 11:46:30.060016] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.060116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:24.573 [2024-11-20 11:46:30.060152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:24.573 [2024-11-20 11:46:30.060166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.060198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:24.573 [2024-11-20 11:46:30.063585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.063643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:24.573 [2024-11-20 11:46:30.063695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:32:24.573 [2024-11-20 11:46:30.063707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.064025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.064045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:24.573 [2024-11-20 11:46:30.064059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:32:24.573 [2024-11-20 11:46:30.064070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.067741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.067805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:24.573 [2024-11-20 11:46:30.067844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.642 ms 00:32:24.573 [2024-11-20 11:46:30.067858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.074256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.074323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:24.573 [2024-11-20 11:46:30.074375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.329 ms 00:32:24.573 [2024-11-20 11:46:30.074387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.085536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.085619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:24.573 [2024-11-20 11:46:30.085656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.083 ms 00:32:24.573 [2024-11-20 11:46:30.085678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.094281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.094355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:24.573 [2024-11-20 11:46:30.094392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.542 ms 00:32:24.573 [2024-11-20 11:46:30.094404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.094565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.094602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:24.573 [2024-11-20 11:46:30.094617] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:32:24.573 [2024-11-20 11:46:30.094629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.106204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.106259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:24.573 [2024-11-20 11:46:30.106292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.547 ms 00:32:24.573 [2024-11-20 11:46:30.106303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.117842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.117897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:24.573 [2024-11-20 11:46:30.117934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.477 ms 00:32:24.573 [2024-11-20 11:46:30.117945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.128768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.128824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:24.573 [2024-11-20 11:46:30.128860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.776 ms 00:32:24.573 [2024-11-20 11:46:30.128872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.139580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.573 [2024-11-20 11:46:30.139634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:24.573 [2024-11-20 11:46:30.139667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.634 ms 00:32:24.573 [2024-11-20 11:46:30.139678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.573 [2024-11-20 11:46:30.139723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:24.573 [2024-11-20 11:46:30.139746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 
11:46:30.139896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.139988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:24.573 [2024-11-20 11:46:30.140135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:32:24.574 [2024-11-20 11:46:30.140263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.140991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:24.574 [2024-11-20 11:46:30.141194] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:24.574 [2024-11-20 11:46:30.141215] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:32:24.575 [2024-11-20 11:46:30.141267] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:24.575 [2024-11-20 11:46:30.141287] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:24.575 [2024-11-20 11:46:30.141299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:24.575 [2024-11-20 11:46:30.141313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:24.575 [2024-11-20 11:46:30.141325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:24.575 [2024-11-20 11:46:30.141339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:24.575 [2024-11-20 11:46:30.141351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:24.575 [2024-11-20 11:46:30.141370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:24.575 [2024-11-20 11:46:30.141382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:24.575 [2024-11-20 11:46:30.141401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
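The two bdev_ftl_unmap calls earlier in this run (ftl/trim.sh@78 and @79) and the statistics dump just above are easier to follow with the layout numbers in hand: the startup layout dump reports 23592960 L2P entries, so the test trims 1024 blocks at each end of the LBA space (LBA 0, and LBA 23592960 - 1024 = 23591936), and WAF is reported as "inf" because user writes are 0; all 960 recorded writes are internal metadata traffic. A minimal sketch of that trim step, assuming only the rpc.py path and the ftl0 bdev already used in this run:

```bash
# Sketch of the ftl/trim.sh unmap step seen in this log; the second --lba
# is derived from the "L2P entries: 23592960" line in the startup layout dump.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
L2P_ENTRIES=23592960
NUM_BLOCKS=1024

"$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$NUM_BLOCKS"
"$RPC" bdev_ftl_unmap -b ftl0 --lba $((L2P_ENTRIES - NUM_BLOCKS)) \
    --num_blocks "$NUM_BLOCKS"   # 23591936, i.e. the last 1024 blocks
```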
00:32:24.575 [2024-11-20 11:46:30.141414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:24.575 [2024-11-20 11:46:30.141433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:32:24.575 [2024-11-20 11:46:30.141446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.158138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.575 [2024-11-20 11:46:30.158198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:24.575 [2024-11-20 11:46:30.158241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.629 ms 00:32:24.575 [2024-11-20 11:46:30.158254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.158849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.575 [2024-11-20 11:46:30.158881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:24.575 [2024-11-20 11:46:30.158904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:32:24.575 [2024-11-20 11:46:30.158924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.218278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.575 [2024-11-20 11:46:30.218370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.575 [2024-11-20 11:46:30.218410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.575 [2024-11-20 11:46:30.218425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.218584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.575 [2024-11-20 11:46:30.218605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.575 [2024-11-20 11:46:30.218626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.575 [2024-11-20 11:46:30.218645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.218753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.575 [2024-11-20 11:46:30.218773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.575 [2024-11-20 11:46:30.218797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.575 [2024-11-20 11:46:30.218810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.218839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.575 [2024-11-20 11:46:30.218852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.575 [2024-11-20 11:46:30.218866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.575 [2024-11-20 11:46:30.218877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.575 [2024-11-20 11:46:30.320172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.575 [2024-11-20 11:46:30.320248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.575 [2024-11-20 11:46:30.320290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.575 [2024-11-20 11:46:30.320304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 
11:46:30.397374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.397451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.834 [2024-11-20 11:46:30.397494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.397515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.397671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.397693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.834 [2024-11-20 11:46:30.397718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.397731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.397778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.397793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.834 [2024-11-20 11:46:30.397812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.397825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.397968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.397988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:24.834 [2024-11-20 11:46:30.398008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.398021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.398087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.398108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:24.834 [2024-11-20 11:46:30.398126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.398139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.398197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.398216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.834 [2024-11-20 11:46:30.398234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.398246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.398308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.834 [2024-11-20 11:46:30.398324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.834 [2024-11-20 11:46:30.398339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.834 [2024-11-20 11:46:30.398350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.834 [2024-11-20 11:46:30.398522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.478 ms, result 0 00:32:25.784 11:46:31 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:32:25.784 11:46:31 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:25.784 [2024-11-20 11:46:31.345169] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:25.784 [2024-11-20 11:46:31.345397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78723 ] 00:32:25.784 [2024-11-20 11:46:31.519365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.057 [2024-11-20 11:46:31.640000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.315 [2024-11-20 11:46:31.971430] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:26.315 [2024-11-20 11:46:31.971543] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:26.575 [2024-11-20 11:46:32.134069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.134139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:26.575 [2024-11-20 11:46:32.134173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:26.575 [2024-11-20 11:46:32.134185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.137442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.137505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:26.575 [2024-11-20 11:46:32.137537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:32:26.575 [2024-11-20 11:46:32.137562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.137791] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:26.575 [2024-11-20 11:46:32.138707] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:26.575 [2024-11-20 11:46:32.138761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.138792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:26.575 [2024-11-20 11:46:32.138805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:32:26.575 [2024-11-20 11:46:32.138817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.140934] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:26.575 [2024-11-20 11:46:32.155758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.155839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:26.575 [2024-11-20 11:46:32.155873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.826 ms 00:32:26.575 [2024-11-20 11:46:32.155885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.155999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.156021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:26.575 [2024-11-20 11:46:32.156034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:32:26.575 [2024-11-20 11:46:32.156045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.164875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.164934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:26.575 [2024-11-20 11:46:32.164965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.743 ms 00:32:26.575 [2024-11-20 11:46:32.164976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.165096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.165117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:26.575 [2024-11-20 11:46:32.165130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:26.575 [2024-11-20 11:46:32.165141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.165196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.165270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:26.575 [2024-11-20 11:46:32.165291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:26.575 [2024-11-20 11:46:32.165305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.165343] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:26.575 [2024-11-20 11:46:32.170143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.170199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:26.575 [2024-11-20 11:46:32.170230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.809 ms 00:32:26.575 [2024-11-20 11:46:32.170241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.170323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.170342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:26.575 [2024-11-20 11:46:32.170355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:26.575 [2024-11-20 11:46:32.170365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.170396] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:26.575 [2024-11-20 11:46:32.170444] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:26.575 [2024-11-20 11:46:32.170502] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:26.575 [2024-11-20 11:46:32.170523] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:26.575 [2024-11-20 11:46:32.170645] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:26.575 [2024-11-20 11:46:32.170666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:26.575 [2024-11-20 11:46:32.170681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:26.575 [2024-11-20 11:46:32.170697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:26.575 [2024-11-20 11:46:32.170716] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:26.575 [2024-11-20 11:46:32.170728] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:26.575 [2024-11-20 11:46:32.170740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:26.575 [2024-11-20 11:46:32.170751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:26.575 [2024-11-20 11:46:32.170762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:26.575 [2024-11-20 11:46:32.170774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.170786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:26.575 [2024-11-20 11:46:32.170798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:32:26.575 [2024-11-20 11:46:32.170809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.170906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.575 [2024-11-20 11:46:32.170922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:26.575 [2024-11-20 11:46:32.170940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:26.575 [2024-11-20 11:46:32.170951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.575 [2024-11-20 11:46:32.171065] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:26.575 [2024-11-20 11:46:32.171094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:26.575 [2024-11-20 11:46:32.171108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:26.575 [2024-11-20 11:46:32.171120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.575 [2024-11-20 11:46:32.171132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:26.575 [2024-11-20 11:46:32.171143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:26.576 [2024-11-20 11:46:32.171176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:26.576 [2024-11-20 11:46:32.171197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:26.576 [2024-11-20 11:46:32.171208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:26.576 [2024-11-20 11:46:32.171218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:26.576 [2024-11-20 11:46:32.171243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:26.576 [2024-11-20 11:46:32.171255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:26.576 [2024-11-20 11:46:32.171266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171276] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:26.576 [2024-11-20 11:46:32.171287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:26.576 [2024-11-20 11:46:32.171321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:26.576 [2024-11-20 11:46:32.171353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:26.576 [2024-11-20 11:46:32.171385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:26.576 [2024-11-20 11:46:32.171416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:26.576 [2024-11-20 11:46:32.171448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:26.576 [2024-11-20 11:46:32.171468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:26.576 [2024-11-20 11:46:32.171479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:26.576 [2024-11-20 11:46:32.171489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:26.576 [2024-11-20 11:46:32.171501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:26.576 [2024-11-20 11:46:32.171511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:26.576 [2024-11-20 11:46:32.171522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:26.576 [2024-11-20 11:46:32.171588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:26.576 [2024-11-20 11:46:32.171600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171611] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:26.576 [2024-11-20 11:46:32.171623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:26.576 [2024-11-20 11:46:32.171635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:26.576 [2024-11-20 11:46:32.171664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:26.576 
[2024-11-20 11:46:32.171676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:26.576 [2024-11-20 11:46:32.171687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:26.576 [2024-11-20 11:46:32.171698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:26.576 [2024-11-20 11:46:32.171711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:26.576 [2024-11-20 11:46:32.171722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:26.576 [2024-11-20 11:46:32.171735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:26.576 [2024-11-20 11:46:32.171750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:26.576 [2024-11-20 11:46:32.171775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:26.576 [2024-11-20 11:46:32.171787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:26.576 [2024-11-20 11:46:32.171798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:26.576 [2024-11-20 11:46:32.171810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:26.576 [2024-11-20 11:46:32.171822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:26.576 [2024-11-20 11:46:32.171833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:26.576 [2024-11-20 11:46:32.171845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:26.576 [2024-11-20 11:46:32.171857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:26.576 [2024-11-20 11:46:32.171868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:26.576 [2024-11-20 11:46:32.171942] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:26.576 [2024-11-20 11:46:32.171955] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:26.576 [2024-11-20 11:46:32.171979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:26.576 [2024-11-20 11:46:32.171991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:26.576 [2024-11-20 11:46:32.172002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:26.576 [2024-11-20 11:46:32.172015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.172027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:26.576 [2024-11-20 11:46:32.172045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:32:26.576 [2024-11-20 11:46:32.172056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 11:46:32.211868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.211948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:26.576 [2024-11-20 11:46:32.211983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.737 ms 00:32:26.576 [2024-11-20 11:46:32.211996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 11:46:32.212185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.212212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:26.576 [2024-11-20 11:46:32.212242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:26.576 [2024-11-20 11:46:32.212254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 11:46:32.260648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.260736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:26.576 [2024-11-20 11:46:32.260771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.345 ms 00:32:26.576 [2024-11-20 11:46:32.260789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 11:46:32.260937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.260958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:26.576 [2024-11-20 11:46:32.260972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:26.576 [2024-11-20 11:46:32.260983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 11:46:32.261659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.261714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:26.576 [2024-11-20 11:46:32.261730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:32:26.576 [2024-11-20 11:46:32.261751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 
11:46:32.261920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.261941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:26.576 [2024-11-20 11:46:32.261954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:32:26.576 [2024-11-20 11:46:32.261966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.576 [2024-11-20 11:46:32.280327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.576 [2024-11-20 11:46:32.280388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:26.576 [2024-11-20 11:46:32.280421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.331 ms 00:32:26.577 [2024-11-20 11:46:32.280433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.577 [2024-11-20 11:46:32.295576] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:26.577 [2024-11-20 11:46:32.295636] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:26.577 [2024-11-20 11:46:32.295670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.577 [2024-11-20 11:46:32.295683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:26.577 [2024-11-20 11:46:32.295696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:32:26.577 [2024-11-20 11:46:32.295707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.577 [2024-11-20 11:46:32.321146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.577 [2024-11-20 11:46:32.321216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:26.577 [2024-11-20 11:46:32.321266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.346 ms 00:32:26.577 [2024-11-20 11:46:32.321278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.577 [2024-11-20 11:46:32.335078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.577 [2024-11-20 11:46:32.335136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:26.577 [2024-11-20 11:46:32.335168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.689 ms 00:32:26.577 [2024-11-20 11:46:32.335179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.348661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.348702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:26.835 [2024-11-20 11:46:32.348734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:32:26.835 [2024-11-20 11:46:32.348745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.349605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.349661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:26.835 [2024-11-20 11:46:32.349677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:32:26.835 [2024-11-20 11:46:32.349690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.418755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.418851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:26.835 [2024-11-20 11:46:32.418887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.028 ms 00:32:26.835 [2024-11-20 11:46:32.418900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.429812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:26.835 [2024-11-20 11:46:32.448770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.448842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:26.835 [2024-11-20 11:46:32.448878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.731 ms 00:32:26.835 [2024-11-20 11:46:32.448890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.449031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.449052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:26.835 [2024-11-20 11:46:32.449066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:26.835 [2024-11-20 11:46:32.449078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.449185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.449204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:26.835 [2024-11-20 11:46:32.449217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:32:26.835 [2024-11-20 11:46:32.449258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.449304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.449326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:26.835 [2024-11-20 11:46:32.449340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:26.835 [2024-11-20 11:46:32.449352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.449412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:26.835 [2024-11-20 11:46:32.449430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.449444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:26.835 [2024-11-20 11:46:32.449457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:26.835 [2024-11-20 11:46:32.449469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.480217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.480281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:26.835 [2024-11-20 11:46:32.480314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.718 ms 00:32:26.835 [2024-11-20 11:46:32.480326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.480462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.835 [2024-11-20 11:46:32.480482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:32:26.835 [2024-11-20 11:46:32.480496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:26.835 [2024-11-20 11:46:32.480523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.835 [2024-11-20 11:46:32.481795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:26.835 [2024-11-20 11:46:32.485443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.308 ms, result 0 00:32:26.835 [2024-11-20 11:46:32.486399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:26.835 [2024-11-20 11:46:32.501790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:27.770  [2024-11-20T11:46:34.910Z] Copying: 25/256 [MB] (25 MBps) [2024-11-20T11:46:35.845Z] Copying: 47/256 [MB] (22 MBps) [2024-11-20T11:46:36.780Z] Copying: 70/256 [MB] (22 MBps) [2024-11-20T11:46:37.715Z] Copying: 92/256 [MB] (22 MBps) [2024-11-20T11:46:38.650Z] Copying: 114/256 [MB] (22 MBps) [2024-11-20T11:46:39.585Z] Copying: 137/256 [MB] (22 MBps) [2024-11-20T11:46:40.520Z] Copying: 160/256 [MB] (23 MBps) [2024-11-20T11:46:41.892Z] Copying: 183/256 [MB] (22 MBps) [2024-11-20T11:46:42.825Z] Copying: 205/256 [MB] (22 MBps) [2024-11-20T11:46:43.760Z] Copying: 228/256 [MB] (22 MBps) [2024-11-20T11:46:43.760Z] Copying: 250/256 [MB] (22 MBps) [2024-11-20T11:46:43.760Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-20 11:46:43.733023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:37.994 [2024-11-20 11:46:43.744389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.994 [2024-11-20 11:46:43.744445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:37.994 [2024-11-20 11:46:43.744492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:37.994 [2024-11-20 11:46:43.744518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.994 [2024-11-20 11:46:43.744578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:37.994 [2024-11-20 11:46:43.747869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.994 [2024-11-20 11:46:43.747916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:37.994 [2024-11-20 11:46:43.747945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:32:37.994 [2024-11-20 11:46:43.747956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.994 [2024-11-20 11:46:43.748279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.994 [2024-11-20 11:46:43.748310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:37.994 [2024-11-20 11:46:43.748325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:32:37.994 [2024-11-20 11:46:43.748336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.994 [2024-11-20 11:46:43.751648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.994 [2024-11-20 11:46:43.751697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:37.994 [2024-11-20 11:46:43.751727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.289 ms 00:32:37.994 [2024-11-20 11:46:43.751738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:37.994 [2024-11-20 11:46:43.757933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:37.994 [2024-11-20 11:46:43.757985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:37.994 [2024-11-20 11:46:43.758013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:32:37.995 [2024-11-20 11:46:43.758024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.786286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.786360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:38.255 [2024-11-20 11:46:43.786393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.196 ms 00:32:38.255 [2024-11-20 11:46:43.786405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.803665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.803757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:38.255 [2024-11-20 11:46:43.803790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.211 ms 00:32:38.255 [2024-11-20 11:46:43.803813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.803960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.803979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:38.255 [2024-11-20 11:46:43.804008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:32:38.255 [2024-11-20 11:46:43.804020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.832102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.832156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:38.255 [2024-11-20 11:46:43.832186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.021 ms 00:32:38.255 [2024-11-20 11:46:43.832198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.860152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.860207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:38.255 [2024-11-20 11:46:43.860238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.894 ms 00:32:38.255 [2024-11-20 11:46:43.860249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.887634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.887688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:38.255 [2024-11-20 11:46:43.887719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.341 ms 00:32:38.255 [2024-11-20 11:46:43.887730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.914918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.255 [2024-11-20 11:46:43.914973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:38.255 [2024-11-20 
11:46:43.915004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.106 ms 00:32:38.255 [2024-11-20 11:46:43.915014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.255 [2024-11-20 11:46:43.915069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:38.255 [2024-11-20 11:46:43.915096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:38.255 [2024-11-20 11:46:43.915220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915749] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.915994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916040] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:38.256 [2024-11-20 11:46:43.916216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 
11:46:43.916360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:38.257 [2024-11-20 11:46:43.916405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:38.257 [2024-11-20 11:46:43.916416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:32:38.257 [2024-11-20 11:46:43.916428] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:38.257 [2024-11-20 11:46:43.916440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:38.257 [2024-11-20 11:46:43.916451] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:38.257 [2024-11-20 11:46:43.916462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:38.257 [2024-11-20 11:46:43.916473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:38.257 [2024-11-20 11:46:43.916485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:38.257 [2024-11-20 11:46:43.916495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:38.257 [2024-11-20 11:46:43.916506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:38.257 [2024-11-20 11:46:43.916519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:38.257 [2024-11-20 11:46:43.916532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.257 [2024-11-20 11:46:43.916562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:38.257 [2024-11-20 11:46:43.916592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:32:38.257 [2024-11-20 11:46:43.916605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.257 [2024-11-20 11:46:43.932067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.257 [2024-11-20 11:46:43.932122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:38.257 [2024-11-20 11:46:43.932154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.434 ms 00:32:38.257 [2024-11-20 11:46:43.932165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.257 [2024-11-20 11:46:43.932694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.257 [2024-11-20 11:46:43.932730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:38.257 [2024-11-20 11:46:43.932745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:32:38.257 [2024-11-20 11:46:43.932756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.257 [2024-11-20 11:46:43.975194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.257 [2024-11-20 11:46:43.975256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:38.257 [2024-11-20 11:46:43.975286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.257 [2024-11-20 11:46:43.975298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.257 [2024-11-20 11:46:43.975424] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:32:38.257 [2024-11-20 11:46:43.975446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:38.257 [2024-11-20 11:46:43.975459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.257 [2024-11-20 11:46:43.975470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.257 [2024-11-20 11:46:43.975589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.257 [2024-11-20 11:46:43.975610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:38.257 [2024-11-20 11:46:43.975624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.257 [2024-11-20 11:46:43.975636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.257 [2024-11-20 11:46:43.975665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.257 [2024-11-20 11:46:43.975685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:38.257 [2024-11-20 11:46:43.975712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.257 [2024-11-20 11:46:43.975724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.068054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.068133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:38.542 [2024-11-20 11:46:44.068168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.068180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.150718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.150780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:38.542 [2024-11-20 11:46:44.150799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.150812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.150893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.150911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:38.542 [2024-11-20 11:46:44.150939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.150966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.151035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.151049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:38.542 [2024-11-20 11:46:44.151068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.151079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.151203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.151232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:38.542 [2024-11-20 11:46:44.151247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.151259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:32:38.542 [2024-11-20 11:46:44.151332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.151351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:38.542 [2024-11-20 11:46:44.151365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.151383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.151434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.151451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:38.542 [2024-11-20 11:46:44.151464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.151475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.151533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.542 [2024-11-20 11:46:44.151590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:38.542 [2024-11-20 11:46:44.151611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.542 [2024-11-20 11:46:44.151623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.542 [2024-11-20 11:46:44.151840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.424 ms, result 0 00:32:39.478 00:32:39.478 00:32:39.478 11:46:45 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:32:39.478 11:46:45 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:32:40.414 11:46:45 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:40.414 [2024-11-20 11:46:45.944241] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
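The three trim.sh steps above verify the device contents and then re-seed it: the spdk_dd at the top of this excerpt read 65536 blocks from ftl0 into test/ftl/data, cmp checks the first 4 MiB of that read-back against /dev/zero (presumably confirming a trimmed extent reads as zeroes), md5sum records a checksum of the full file, and the final spdk_dd writes a known random pattern back to ftl0. A minimal sketch of that sequence, assuming the paths shown in this log and inferring --ib=ftl0 for the read, since that invocation is truncated at the top of the excerpt:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    DD=$SPDK/build/bin/spdk_dd
    CFG=$SPDK/test/ftl/config/ftl.json

    # Read 65536 blocks (256 MiB at 4 KiB/block, matching the
    # "Copying: 256/256 [MB]" progress above) from the FTL bdev into a file.
    $DD --ib=ftl0 --of=$SPDK/test/ftl/data --count=65536 --json=$CFG

    # Compare the first 4 MiB of the read-back against zeroes (trim.sh line 86).
    cmp --bytes=4194304 $SPDK/test/ftl/data /dev/zero

    # Checksum the full read-back for later comparison (trim.sh line 87).
    md5sum $SPDK/test/ftl/data

    # Re-seed the device with a known pattern for the next pass (trim.sh line 90).
    $DD --if=$SPDK/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=$CFG

Each spdk_dd invocation triggers the full FTL startup and shutdown sequence traced below, which is why the Check configuration / Open base bdev / Load super block steps repeat for pid 78877.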
00:32:40.414 [2024-11-20 11:46:45.944464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78877 ] 00:32:40.414 [2024-11-20 11:46:46.120442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.672 [2024-11-20 11:46:46.237384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.931 [2024-11-20 11:46:46.560993] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:40.931 [2024-11-20 11:46:46.561080] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:41.189 [2024-11-20 11:46:46.724364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.189 [2024-11-20 11:46:46.724433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:41.189 [2024-11-20 11:46:46.724452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:41.190 [2024-11-20 11:46:46.724464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.727773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.727832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:41.190 [2024-11-20 11:46:46.727848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:32:41.190 [2024-11-20 11:46:46.727859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.728028] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:41.190 [2024-11-20 11:46:46.728957] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:41.190 [2024-11-20 11:46:46.729027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.729041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:41.190 [2024-11-20 11:46:46.729053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:32:41.190 [2024-11-20 11:46:46.729064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.731186] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:41.190 [2024-11-20 11:46:46.746301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.746357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:41.190 [2024-11-20 11:46:46.746375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.117 ms 00:32:41.190 [2024-11-20 11:46:46.746387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.746566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.746589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:41.190 [2024-11-20 11:46:46.746603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:41.190 [2024-11-20 11:46:46.746614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.754955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:41.190 [2024-11-20 11:46:46.755014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:41.190 [2024-11-20 11:46:46.755029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.282 ms 00:32:41.190 [2024-11-20 11:46:46.755041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.755162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.755182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:41.190 [2024-11-20 11:46:46.755195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:41.190 [2024-11-20 11:46:46.755205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.755242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.755263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:41.190 [2024-11-20 11:46:46.755275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:41.190 [2024-11-20 11:46:46.755302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.755333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:41.190 [2024-11-20 11:46:46.759955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.760014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:41.190 [2024-11-20 11:46:46.760029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:32:41.190 [2024-11-20 11:46:46.760039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.760121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.760140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:41.190 [2024-11-20 11:46:46.760153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:41.190 [2024-11-20 11:46:46.760163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.760192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:41.190 [2024-11-20 11:46:46.760259] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:41.190 [2024-11-20 11:46:46.760340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:41.190 [2024-11-20 11:46:46.760361] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:41.190 [2024-11-20 11:46:46.760478] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:41.190 [2024-11-20 11:46:46.760505] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:41.190 [2024-11-20 11:46:46.760525] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:41.190 [2024-11-20 11:46:46.760555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:41.190 [2024-11-20 11:46:46.760578] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:41.190 [2024-11-20 11:46:46.760590] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:41.190 [2024-11-20 11:46:46.760601] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:41.190 [2024-11-20 11:46:46.760611] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:41.190 [2024-11-20 11:46:46.760622] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:41.190 [2024-11-20 11:46:46.760634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.760645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:41.190 [2024-11-20 11:46:46.760657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:32:41.190 [2024-11-20 11:46:46.760667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.760769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.190 [2024-11-20 11:46:46.760785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:41.190 [2024-11-20 11:46:46.760802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:41.190 [2024-11-20 11:46:46.760813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.190 [2024-11-20 11:46:46.760938] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:41.190 [2024-11-20 11:46:46.760956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:41.190 [2024-11-20 11:46:46.760969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:41.190 [2024-11-20 11:46:46.760980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.760991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:41.190 [2024-11-20 11:46:46.761001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:41.190 [2024-11-20 11:46:46.761031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:41.190 [2024-11-20 11:46:46.761050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:41.190 [2024-11-20 11:46:46.761059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:41.190 [2024-11-20 11:46:46.761069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:41.190 [2024-11-20 11:46:46.761092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:41.190 [2024-11-20 11:46:46.761102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:41.190 [2024-11-20 11:46:46.761112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:41.190 [2024-11-20 11:46:46.761132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761142] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:41.190 [2024-11-20 11:46:46.761164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:41.190 [2024-11-20 11:46:46.761194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:41.190 [2024-11-20 11:46:46.761223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:41.190 [2024-11-20 11:46:46.761265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:41.190 [2024-11-20 11:46:46.761295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:41.190 [2024-11-20 11:46:46.761314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:41.190 [2024-11-20 11:46:46.761324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:41.190 [2024-11-20 11:46:46.761334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:41.190 [2024-11-20 11:46:46.761344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:41.190 [2024-11-20 11:46:46.761353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:41.190 [2024-11-20 11:46:46.761363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:41.190 [2024-11-20 11:46:46.761382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:41.190 [2024-11-20 11:46:46.761392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761401] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:41.190 [2024-11-20 11:46:46.761412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:41.190 [2024-11-20 11:46:46.761422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:41.190 [2024-11-20 11:46:46.761450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:41.190 [2024-11-20 11:46:46.761477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:41.190 [2024-11-20 11:46:46.761487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:41.190 
[2024-11-20 11:46:46.761497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:41.190 [2024-11-20 11:46:46.761512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:41.190 [2024-11-20 11:46:46.761524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:41.190 [2024-11-20 11:46:46.761537] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:41.190 [2024-11-20 11:46:46.761564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:41.190 [2024-11-20 11:46:46.761591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:41.190 [2024-11-20 11:46:46.761602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:41.190 [2024-11-20 11:46:46.761614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:41.190 [2024-11-20 11:46:46.761625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:41.190 [2024-11-20 11:46:46.761636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:41.190 [2024-11-20 11:46:46.761647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:41.190 [2024-11-20 11:46:46.761657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:41.190 [2024-11-20 11:46:46.761668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:41.190 [2024-11-20 11:46:46.761679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:41.190 [2024-11-20 11:46:46.761733] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:41.190 [2024-11-20 11:46:46.761745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:41.190 [2024-11-20 11:46:46.761768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:41.190 [2024-11-20 11:46:46.761779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:41.191 [2024-11-20 11:46:46.761790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:41.191 [2024-11-20 11:46:46.761803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.761815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:41.191 [2024-11-20 11:46:46.761833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:32:41.191 [2024-11-20 11:46:46.761843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.798652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.798720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:41.191 [2024-11-20 11:46:46.798738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.733 ms 00:32:41.191 [2024-11-20 11:46:46.798749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.798944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.799009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:41.191 [2024-11-20 11:46:46.799022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:32:41.191 [2024-11-20 11:46:46.799033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.858476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.858549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:41.191 [2024-11-20 11:46:46.858568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.407 ms 00:32:41.191 [2024-11-20 11:46:46.858585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.858721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.858740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:41.191 [2024-11-20 11:46:46.858753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:41.191 [2024-11-20 11:46:46.858763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.859358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.859406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:41.191 [2024-11-20 11:46:46.859421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:32:41.191 [2024-11-20 11:46:46.859440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.859621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.859656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:41.191 [2024-11-20 11:46:46.859668] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:32:41.191 [2024-11-20 11:46:46.859693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.877375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.877432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:41.191 [2024-11-20 11:46:46.877449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.651 ms 00:32:41.191 [2024-11-20 11:46:46.877460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.892159] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:41.191 [2024-11-20 11:46:46.892217] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:41.191 [2024-11-20 11:46:46.892234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.892246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:41.191 [2024-11-20 11:46:46.892258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:32:41.191 [2024-11-20 11:46:46.892268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.917867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.917936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:41.191 [2024-11-20 11:46:46.917953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.508 ms 00:32:41.191 [2024-11-20 11:46:46.917963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.931642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.931698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:41.191 [2024-11-20 11:46:46.931712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.589 ms 00:32:41.191 [2024-11-20 11:46:46.931722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.945171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.945226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:41.191 [2024-11-20 11:46:46.945247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.365 ms 00:32:41.191 [2024-11-20 11:46:46.945257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.191 [2024-11-20 11:46:46.946058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.191 [2024-11-20 11:46:46.946121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:41.191 [2024-11-20 11:46:46.946135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:32:41.191 [2024-11-20 11:46:46.946146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.013788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.013878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:41.449 [2024-11-20 11:46:47.013898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.610 ms 00:32:41.449 [2024-11-20 11:46:47.013909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.024641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:41.449 [2024-11-20 11:46:47.042821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.042898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:41.449 [2024-11-20 11:46:47.042917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.770 ms 00:32:41.449 [2024-11-20 11:46:47.042928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.043061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.043080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:41.449 [2024-11-20 11:46:47.043093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:41.449 [2024-11-20 11:46:47.043103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.043225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.043249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:41.449 [2024-11-20 11:46:47.043263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:32:41.449 [2024-11-20 11:46:47.043274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.043320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.043339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:41.449 [2024-11-20 11:46:47.043351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:41.449 [2024-11-20 11:46:47.043361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.043402] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:41.449 [2024-11-20 11:46:47.043418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.043429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:41.449 [2024-11-20 11:46:47.043440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:41.449 [2024-11-20 11:46:47.043450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.070571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.070625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:41.449 [2024-11-20 11:46:47.070641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.090 ms 00:32:41.449 [2024-11-20 11:46:47.070652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.449 [2024-11-20 11:46:47.070804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.449 [2024-11-20 11:46:47.070824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:41.449 [2024-11-20 11:46:47.070837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:41.449 [2024-11-20 11:46:47.070862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:41.449 [2024-11-20 11:46:47.072171] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:41.449 [2024-11-20 11:46:47.075746] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.410 ms, result 0 00:32:41.449 [2024-11-20 11:46:47.076666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:41.449 [2024-11-20 11:46:47.091317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:41.707  [2024-11-20T11:46:47.474Z] Copying: 4096/4096 [kB] (average 22 MBps)
[2024-11-20 11:46:47.268708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:41.708 [2024-11-20 11:46:47.279682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.279738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:41.708 [2024-11-20 11:46:47.279755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:41.708 [2024-11-20 11:46:47.279772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.279800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:41.708 [2024-11-20 11:46:47.283040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.283088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:41.708 [2024-11-20 11:46:47.283102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:32:41.708 [2024-11-20 11:46:47.283112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.285035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.285092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:41.708 [2024-11-20 11:46:47.285107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:32:41.708 [2024-11-20 11:46:47.285124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.288651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.288702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:41.708 [2024-11-20 11:46:47.288727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:32:41.708 [2024-11-20 11:46:47.288738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.295128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.295178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:41.708 [2024-11-20 11:46:47.295192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.350 ms 00:32:41.708 [2024-11-20 11:46:47.295202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.321390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.321448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:41.708 [2024-11-20 11:46:47.321463] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 26.125 ms 00:32:41.708 [2024-11-20 11:46:47.321473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.337581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.337655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:41.708 [2024-11-20 11:46:47.337677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.051 ms 00:32:41.708 [2024-11-20 11:46:47.337688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.337827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.337846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:41.708 [2024-11-20 11:46:47.337857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:32:41.708 [2024-11-20 11:46:47.337867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.364641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.364695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:41.708 [2024-11-20 11:46:47.364710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.707 ms 00:32:41.708 [2024-11-20 11:46:47.364720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.390662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.390716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:41.708 [2024-11-20 11:46:47.390730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.885 ms 00:32:41.708 [2024-11-20 11:46:47.390741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.416559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.416611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:41.708 [2024-11-20 11:46:47.416625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.758 ms 00:32:41.708 [2024-11-20 11:46:47.416635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.442598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.708 [2024-11-20 11:46:47.442661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:41.708 [2024-11-20 11:46:47.442677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.876 ms 00:32:41.708 [2024-11-20 11:46:47.442687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.708 [2024-11-20 11:46:47.442743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:41.708 [2024-11-20 11:46:47.442766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:32:41.708 [2024-11-20 11:46:47.442810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.442992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443638] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:41.708 [2024-11-20 11:46:47.443893] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:41.708 [2024-11-20 11:46:47.443904] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:32:41.708 [2024-11-20 11:46:47.443915] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:41.708 [2024-11-20 11:46:47.443925] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:32:41.708 [2024-11-20 11:46:47.443935] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:41.708 [2024-11-20 11:46:47.443946] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:41.708 [2024-11-20 11:46:47.443956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:41.708 [2024-11-20 11:46:47.443966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:41.708 [2024-11-20 11:46:47.443976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:41.708 [2024-11-20 11:46:47.443985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:41.708 [2024-11-20 11:46:47.443994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:41.709 [2024-11-20 11:46:47.444004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.709 [2024-11-20 11:46:47.444021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:41.709 [2024-11-20 11:46:47.444032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:32:41.709 [2024-11-20 11:46:47.444042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.709 [2024-11-20 11:46:47.458776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.709 [2024-11-20 11:46:47.458828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:41.709 [2024-11-20 11:46:47.458843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.709 ms 00:32:41.709 [2024-11-20 11:46:47.458854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.709 [2024-11-20 11:46:47.459339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.709 [2024-11-20 11:46:47.459369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:41.709 [2024-11-20 11:46:47.459383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:32:41.709 [2024-11-20 11:46:47.459393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.500475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.500541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:42.024 [2024-11-20 11:46:47.500557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.500569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.500680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.500697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:42.024 [2024-11-20 11:46:47.500708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.500718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.500808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.500826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:42.024 [2024-11-20 11:46:47.500838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.500847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.500871] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.500892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:42.024 [2024-11-20 11:46:47.500903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.500913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.598515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.598604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:42.024 [2024-11-20 11:46:47.598622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.598634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.674660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.674731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:42.024 [2024-11-20 11:46:47.674749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.674761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.674871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.674889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:42.024 [2024-11-20 11:46:47.674901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.674912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.674948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.674961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:42.024 [2024-11-20 11:46:47.674979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.674989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.675144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.675163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:42.024 [2024-11-20 11:46:47.675175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.024 [2024-11-20 11:46:47.675187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.024 [2024-11-20 11:46:47.675238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.024 [2024-11-20 11:46:47.675265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:42.024 [2024-11-20 11:46:47.675277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.025 [2024-11-20 11:46:47.675294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.025 [2024-11-20 11:46:47.675343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.025 [2024-11-20 11:46:47.675358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:42.025 [2024-11-20 11:46:47.675375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.025 [2024-11-20 11:46:47.675385] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:42.025 [2024-11-20 11:46:47.675439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.025 [2024-11-20 11:46:47.675456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:42.025 [2024-11-20 11:46:47.675474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.025 [2024-11-20 11:46:47.675485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.025 [2024-11-20 11:46:47.675677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.001 ms, result 0 00:32:42.956 00:32:42.956 00:32:42.956 11:46:48 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78908 00:32:42.956 11:46:48 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:32:42.956 11:46:48 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78908 00:32:42.956 11:46:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78908 ']' 00:32:42.956 11:46:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.956 11:46:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.956 11:46:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.956 11:46:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.956 11:46:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:43.214 [2024-11-20 11:46:48.839693] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:32:43.214 [2024-11-20 11:46:48.839873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78908 ] 00:32:43.471 [2024-11-20 11:46:49.019137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.471 [2024-11-20 11:46:49.162111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.848 11:46:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.848 11:46:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:32:44.848 11:46:50 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:32:44.848 [2024-11-20 11:46:50.428039] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:44.848 [2024-11-20 11:46:50.428146] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:45.109 [2024-11-20 11:46:50.618699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.618786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:45.109 [2024-11-20 11:46:50.618818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:45.109 [2024-11-20 11:46:50.618832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.623043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.623094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:45.109 [2024-11-20 11:46:50.623112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.184 ms 00:32:45.109 [2024-11-20 11:46:50.623124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.623242] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:45.109 [2024-11-20 11:46:50.624044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:45.109 [2024-11-20 11:46:50.624078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.624092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:45.109 [2024-11-20 11:46:50.624121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:32:45.109 [2024-11-20 11:46:50.624133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.626920] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:45.109 [2024-11-20 11:46:50.642882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.642938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:45.109 [2024-11-20 11:46:50.642956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.968 ms 00:32:45.109 [2024-11-20 11:46:50.642971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.643079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.643104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:45.109 [2024-11-20 11:46:50.643117] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:45.109 [2024-11-20 11:46:50.643130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.655100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.655168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:45.109 [2024-11-20 11:46:50.655186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.907 ms 00:32:45.109 [2024-11-20 11:46:50.655201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.655394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.655431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:45.109 [2024-11-20 11:46:50.655445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:45.109 [2024-11-20 11:46:50.655465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.655513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.655531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:45.109 [2024-11-20 11:46:50.655544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:45.109 [2024-11-20 11:46:50.655582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.655623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:45.109 [2024-11-20 11:46:50.660683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.660742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:45.109 [2024-11-20 11:46:50.660760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.068 ms 00:32:45.109 [2024-11-20 11:46:50.660772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.660839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.109 [2024-11-20 11:46:50.660857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:45.109 [2024-11-20 11:46:50.660872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:45.109 [2024-11-20 11:46:50.660885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.109 [2024-11-20 11:46:50.660918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:45.109 [2024-11-20 11:46:50.660946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:45.109 [2024-11-20 11:46:50.660996] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:45.109 [2024-11-20 11:46:50.661018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:45.109 [2024-11-20 11:46:50.661117] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:45.109 [2024-11-20 11:46:50.661133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:45.109 [2024-11-20 11:46:50.661153] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:45.109 [2024-11-20 11:46:50.661170] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:45.109 [2024-11-20 11:46:50.661187] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:45.109 [2024-11-20 11:46:50.661198] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:45.109 [2024-11-20 11:46:50.661211] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:45.109 [2024-11-20 11:46:50.661222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:45.110 [2024-11-20 11:46:50.661282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:45.110 [2024-11-20 11:46:50.661298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.110 [2024-11-20 11:46:50.661313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:45.110 [2024-11-20 11:46:50.661325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:32:45.110 [2024-11-20 11:46:50.661351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.110 [2024-11-20 11:46:50.661449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.110 [2024-11-20 11:46:50.661473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:45.110 [2024-11-20 11:46:50.661487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:45.110 [2024-11-20 11:46:50.661506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.110 [2024-11-20 11:46:50.661645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:45.110 [2024-11-20 11:46:50.661674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:45.110 [2024-11-20 11:46:50.661689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:45.110 [2024-11-20 11:46:50.661707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.110 [2024-11-20 11:46:50.661734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:45.110 [2024-11-20 11:46:50.661750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:45.110 [2024-11-20 11:46:50.661762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:45.110 [2024-11-20 11:46:50.661783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:45.110 [2024-11-20 11:46:50.661796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:45.110 [2024-11-20 11:46:50.661811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:45.110 [2024-11-20 11:46:50.661823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:45.110 [2024-11-20 11:46:50.661839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:45.110 [2024-11-20 11:46:50.661851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:45.110 [2024-11-20 11:46:50.661867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:45.110 [2024-11-20 11:46:50.661879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:45.110 [2024-11-20 11:46:50.661895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.110 
[2024-11-20 11:46:50.661906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:45.110 [2024-11-20 11:46:50.661922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:45.110 [2024-11-20 11:46:50.661935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.110 [2024-11-20 11:46:50.661951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:45.110 [2024-11-20 11:46:50.661976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:45.110 [2024-11-20 11:46:50.661994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.110 [2024-11-20 11:46:50.662005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:45.110 [2024-11-20 11:46:50.662025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.110 [2024-11-20 11:46:50.662053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:45.110 [2024-11-20 11:46:50.662066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.110 [2024-11-20 11:46:50.662095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:45.110 [2024-11-20 11:46:50.662112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:45.110 [2024-11-20 11:46:50.662154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:45.110 [2024-11-20 11:46:50.662167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:45.110 [2024-11-20 11:46:50.662197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:45.110 [2024-11-20 11:46:50.662213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:45.110 [2024-11-20 11:46:50.662225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:45.110 [2024-11-20 11:46:50.662242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:45.110 [2024-11-20 11:46:50.662254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:45.110 [2024-11-20 11:46:50.662274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:45.110 [2024-11-20 11:46:50.662319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:45.110 [2024-11-20 11:46:50.662331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662348] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:45.110 [2024-11-20 11:46:50.662361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:45.110 [2024-11-20 11:46:50.662387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:45.110 [2024-11-20 11:46:50.662399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:45.110 [2024-11-20 11:46:50.662419] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:32:45.110 [2024-11-20 11:46:50.662432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:45.110 [2024-11-20 11:46:50.662448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:45.110 [2024-11-20 11:46:50.662461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:45.110 [2024-11-20 11:46:50.662478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:45.110 [2024-11-20 11:46:50.662491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:45.110 [2024-11-20 11:46:50.662509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:45.110 [2024-11-20 11:46:50.662524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:45.110 [2024-11-20 11:46:50.662589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:45.110 [2024-11-20 11:46:50.662604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:45.110 [2024-11-20 11:46:50.662623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:45.110 [2024-11-20 11:46:50.662636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:45.110 [2024-11-20 11:46:50.662653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:45.110 [2024-11-20 11:46:50.662691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:45.110 [2024-11-20 11:46:50.662707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:45.110 [2024-11-20 11:46:50.662719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:45.110 [2024-11-20 11:46:50.662735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:45.110 [2024-11-20 11:46:50.662748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:45.110 [2024-11-20 11:46:50.662763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:45.110 [2024-11-20 11:46:50.662775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:45.111 [2024-11-20 11:46:50.662791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:45.111 [2024-11-20 11:46:50.662804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:45.111 [2024-11-20 11:46:50.662820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:45.111 [2024-11-20 
11:46:50.662833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:45.111 [2024-11-20 11:46:50.662855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:45.111 [2024-11-20 11:46:50.662867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:45.111 [2024-11-20 11:46:50.662883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:45.111 [2024-11-20 11:46:50.662895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:45.111 [2024-11-20 11:46:50.662912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.662924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:45.111 [2024-11-20 11:46:50.662941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:32:45.111 [2024-11-20 11:46:50.662953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.706797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.706871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:45.111 [2024-11-20 11:46:50.706897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.745 ms 00:32:45.111 [2024-11-20 11:46:50.706911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.707116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.707135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:45.111 [2024-11-20 11:46:50.707155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:45.111 [2024-11-20 11:46:50.707167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.757201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.757329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.111 [2024-11-20 11:46:50.757384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.990 ms 00:32:45.111 [2024-11-20 11:46:50.757399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.757685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.757713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.111 [2024-11-20 11:46:50.757751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:45.111 [2024-11-20 11:46:50.757764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.758503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.758548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.111 [2024-11-20 11:46:50.758593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:32:45.111 [2024-11-20 11:46:50.758607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.758792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.758816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.111 [2024-11-20 11:46:50.758837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:32:45.111 [2024-11-20 11:46:50.758851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.783613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.783700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.111 [2024-11-20 11:46:50.783727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.720 ms 00:32:45.111 [2024-11-20 11:46:50.783741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.800676] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:45.111 [2024-11-20 11:46:50.800739] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:45.111 [2024-11-20 11:46:50.800768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.800784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:45.111 [2024-11-20 11:46:50.800806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.831 ms 00:32:45.111 [2024-11-20 11:46:50.800820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.829395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.829511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:45.111 [2024-11-20 11:46:50.829566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.425 ms 00:32:45.111 [2024-11-20 11:46:50.829600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.847582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.847673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:45.111 [2024-11-20 11:46:50.847726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.631 ms 00:32:45.111 [2024-11-20 11:46:50.847741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.862159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.862234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:45.111 [2024-11-20 11:46:50.862263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.229 ms 00:32:45.111 [2024-11-20 11:46:50.862278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.111 [2024-11-20 11:46:50.863366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.111 [2024-11-20 11:46:50.863400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:45.111 [2024-11-20 11:46:50.863423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:32:45.111 [2024-11-20 11:46:50.863438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 
11:46:50.959240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:50.959339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:45.378 [2024-11-20 11:46:50.959373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.753 ms 00:32:45.378 [2024-11-20 11:46:50.959390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:50.972830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:45.378 [2024-11-20 11:46:51.001072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.001187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:45.378 [2024-11-20 11:46:51.001219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.452 ms 00:32:45.378 [2024-11-20 11:46:51.001264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.001493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.001553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:45.378 [2024-11-20 11:46:51.001604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:45.378 [2024-11-20 11:46:51.001626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.001751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.001777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:45.378 [2024-11-20 11:46:51.001793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:45.378 [2024-11-20 11:46:51.001811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.001856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.001879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:45.378 [2024-11-20 11:46:51.001894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:45.378 [2024-11-20 11:46:51.001917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.001979] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:45.378 [2024-11-20 11:46:51.002011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.002026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:45.378 [2024-11-20 11:46:51.002053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:45.378 [2024-11-20 11:46:51.002067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.033841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.033912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:45.378 [2024-11-20 11:46:51.033939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.720 ms 00:32:45.378 [2024-11-20 11:46:51.033955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.034107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.378 [2024-11-20 11:46:51.034130] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:45.378 [2024-11-20 11:46:51.034160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:45.378 [2024-11-20 11:46:51.034181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.378 [2024-11-20 11:46:51.035750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:45.378 [2024-11-20 11:46:51.039588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.567 ms, result 0 00:32:45.378 [2024-11-20 11:46:51.040857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:45.378 Some configs were skipped because the RPC state that can call them passed over. 00:32:45.378 11:46:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:32:45.657 [2024-11-20 11:46:51.302217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.657 [2024-11-20 11:46:51.302325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:45.657 [2024-11-20 11:46:51.302363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:32:45.657 [2024-11-20 11:46:51.302384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.657 [2024-11-20 11:46:51.302453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.980 ms, result 0 00:32:45.657 true 00:32:45.657 11:46:51 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:32:45.916 [2024-11-20 11:46:51.526198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.916 [2024-11-20 11:46:51.526288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:45.916 [2024-11-20 11:46:51.526333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:32:45.916 [2024-11-20 11:46:51.526359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.916 [2024-11-20 11:46:51.526442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.650 ms, result 0 00:32:45.916 true 00:32:45.916 11:46:51 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78908 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78908 ']' 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78908 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78908 00:32:45.916 killing process with pid 78908 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78908' 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78908 00:32:45.916 11:46:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78908 00:32:47.294 [2024-11-20 11:46:52.663456] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.663566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:47.294 [2024-11-20 11:46:52.663606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:47.294 [2024-11-20 11:46:52.663621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.663656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:47.294 [2024-11-20 11:46:52.667464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.667512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:47.294 [2024-11-20 11:46:52.667542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:32:47.294 [2024-11-20 11:46:52.667558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.667883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.667909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:47.294 [2024-11-20 11:46:52.667927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:47.294 [2024-11-20 11:46:52.667941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.671815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.671858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:47.294 [2024-11-20 11:46:52.671882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.845 ms 00:32:47.294 [2024-11-20 11:46:52.671894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.679284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.679348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:47.294 [2024-11-20 11:46:52.679368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.333 ms 00:32:47.294 [2024-11-20 11:46:52.679382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.691954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.692029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:47.294 [2024-11-20 11:46:52.692057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.483 ms 00:32:47.294 [2024-11-20 11:46:52.692089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.701654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.701721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:47.294 [2024-11-20 11:46:52.701747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.504 ms 00:32:47.294 [2024-11-20 11:46:52.701760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.701925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.701946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:47.294 [2024-11-20 11:46:52.701974] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:32:47.294 [2024-11-20 11:46:52.701987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.714040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.714090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:47.294 [2024-11-20 11:46:52.714126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.022 ms 00:32:47.294 [2024-11-20 11:46:52.714138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.725707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.725776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:47.294 [2024-11-20 11:46:52.725810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.498 ms 00:32:47.294 [2024-11-20 11:46:52.725824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.737321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.737393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:47.294 [2024-11-20 11:46:52.737426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.427 ms 00:32:47.294 [2024-11-20 11:46:52.737443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.748714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.294 [2024-11-20 11:46:52.748763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:47.294 [2024-11-20 11:46:52.748786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.150 ms 00:32:47.294 [2024-11-20 11:46:52.748799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.294 [2024-11-20 11:46:52.748853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:47.294 [2024-11-20 11:46:52.748880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.748902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.748916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.748935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.748948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.748973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.748988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.749007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.749025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.749044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 
11:46:52.749057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:47.294 [2024-11-20 11:46:52.749076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:32:47.295 [2024-11-20 11:46:52.749522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.749982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:47.295 [2024-11-20 11:46:52.750351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:47.296 [2024-11-20 11:46:52.750666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:47.296 [2024-11-20 11:46:52.750690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:32:47.296 [2024-11-20 11:46:52.750718] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:47.296 [2024-11-20 11:46:52.750738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:47.296 [2024-11-20 11:46:52.750750] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:47.296 [2024-11-20 11:46:52.750764] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:47.296 [2024-11-20 11:46:52.750776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:47.296 [2024-11-20 11:46:52.750790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:47.296 [2024-11-20 11:46:52.750801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:47.296 [2024-11-20 11:46:52.750815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:47.296 [2024-11-20 11:46:52.750825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:47.296 [2024-11-20 11:46:52.750840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:47.296 [2024-11-20 11:46:52.750852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:47.296 [2024-11-20 11:46:52.750867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:32:47.296 [2024-11-20 11:46:52.750879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.767298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.296 [2024-11-20 11:46:52.767363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:47.296 [2024-11-20 11:46:52.767387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.367 ms 00:32:47.296 [2024-11-20 11:46:52.767401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.767956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.296 [2024-11-20 11:46:52.767987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:47.296 [2024-11-20 11:46:52.768006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:32:47.296 [2024-11-20 11:46:52.768022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.827111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:52.827167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:47.296 [2024-11-20 11:46:52.827192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:52.827206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.827351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:52.827371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:47.296 [2024-11-20 11:46:52.827390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:52.827410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.827490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:52.827510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:47.296 [2024-11-20 11:46:52.827548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:52.827565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.827601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:52.827619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:47.296 [2024-11-20 11:46:52.827638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:52.827651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:52.930431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:52.930525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:47.296 [2024-11-20 11:46:52.930560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:52.930577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 
11:46:53.014084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:53.014187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:47.296 [2024-11-20 11:46:53.014213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:53.014234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:53.014369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:53.014390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:47.296 [2024-11-20 11:46:53.014418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:53.014433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:53.014483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:53.014499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:47.296 [2024-11-20 11:46:53.014534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:53.014564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:53.014736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:53.014755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:47.296 [2024-11-20 11:46:53.014775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:53.014787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.296 [2024-11-20 11:46:53.014852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.296 [2024-11-20 11:46:53.014871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:47.296 [2024-11-20 11:46:53.014890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.296 [2024-11-20 11:46:53.014903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.297 [2024-11-20 11:46:53.014967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.297 [2024-11-20 11:46:53.015000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:47.297 [2024-11-20 11:46:53.015024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.297 [2024-11-20 11:46:53.015038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.297 [2024-11-20 11:46:53.015109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.297 [2024-11-20 11:46:53.015126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:47.297 [2024-11-20 11:46:53.015145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.297 [2024-11-20 11:46:53.015158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.297 [2024-11-20 11:46:53.015383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.866 ms, result 0 00:32:48.672 11:46:54 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:48.672 [2024-11-20 11:46:54.140921] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:48.672 [2024-11-20 11:46:54.141097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78976 ] 00:32:48.672 [2024-11-20 11:46:54.332953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.930 [2024-11-20 11:46:54.501062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.189 [2024-11-20 11:46:54.883180] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:49.189 [2024-11-20 11:46:54.883268] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:49.449 [2024-11-20 11:46:55.050135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.050208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:49.449 [2024-11-20 11:46:55.050236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:49.449 [2024-11-20 11:46:55.050249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.054123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.054175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:49.449 [2024-11-20 11:46:55.054191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:32:49.449 [2024-11-20 11:46:55.054202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.054352] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:49.449 [2024-11-20 11:46:55.055226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:49.449 [2024-11-20 11:46:55.055277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.055291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:49.449 [2024-11-20 11:46:55.055320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:32:49.449 [2024-11-20 11:46:55.055332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.057948] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:49.449 [2024-11-20 11:46:55.074675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.074746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:49.449 [2024-11-20 11:46:55.074764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.729 ms 00:32:49.449 [2024-11-20 11:46:55.074776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.074890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.074911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:49.449 [2024-11-20 11:46:55.074924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:49.449 [2024-11-20 
11:46:55.074936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.087134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.087184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:49.449 [2024-11-20 11:46:55.087200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.133 ms 00:32:49.449 [2024-11-20 11:46:55.087212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.087399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.087423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:49.449 [2024-11-20 11:46:55.087449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:32:49.449 [2024-11-20 11:46:55.087461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.087523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.449 [2024-11-20 11:46:55.087545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:49.449 [2024-11-20 11:46:55.087559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:49.449 [2024-11-20 11:46:55.087594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.449 [2024-11-20 11:46:55.087634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:49.449 [2024-11-20 11:46:55.093144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.450 [2024-11-20 11:46:55.093179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:49.450 [2024-11-20 11:46:55.093194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.519 ms 00:32:49.450 [2024-11-20 11:46:55.093206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.450 [2024-11-20 11:46:55.093298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.450 [2024-11-20 11:46:55.093318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:49.450 [2024-11-20 11:46:55.093336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:49.450 [2024-11-20 11:46:55.093350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.450 [2024-11-20 11:46:55.093386] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:49.450 [2024-11-20 11:46:55.093424] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:49.450 [2024-11-20 11:46:55.093471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:49.450 [2024-11-20 11:46:55.093495] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:49.450 [2024-11-20 11:46:55.093650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:49.450 [2024-11-20 11:46:55.093683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:49.450 [2024-11-20 11:46:55.093699] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
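[Editor's note] The second "FTL startup" sequence above belongs to the spdk_dd process (pid 78976): after killprocess stops spdk_tgt (pid 78908), spdk_dd brings up its own SPDK application from the test's JSON config and re-opens the same bdev stack, which is why the Check configuration / Open base bdev / layout traces repeat here. Below is a minimal shell sketch of the steps driving this part of the log, using only commands that appear verbatim in the trace and assuming the workspace paths shown above (SPDK_DIR is a local shorthand introduced here, not a variable used by the test itself):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Trim 1024 blocks at each end of the L2P range (23592960 entries total,
    # so the last aligned offset is 23592960 - 1024 = 23591936), as logged
    # at 11:46:51:
    $SPDK_DIR/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    $SPDK_DIR/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    # Once the target process has exited and released the NVMe devices, read
    # 65536 blocks back from the FTL bdev into a plain file; spdk_dd loads the
    # bdev configuration itself from the JSON file:
    $SPDK_DIR/build/bin/spdk_dd --ib=ftl0 --of=$SPDK_DIR/test/ftl/data \
        --count=65536 --json=$SPDK_DIR/test/ftl/config/ftl.json

Because spdk_dd is a standalone SPDK application, it re-runs the full FTL management startup (superblock load, layout setup, L2P restore) before the copy begins, producing the repeated trace that continues below.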
00:32:49.450 [2024-11-20 11:46:55.093713] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:49.450 [2024-11-20 11:46:55.093733] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:49.450 [2024-11-20 11:46:55.093745] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:49.450 [2024-11-20 11:46:55.093757] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:49.450 [2024-11-20 11:46:55.093768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:49.450 [2024-11-20 11:46:55.093780] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:49.450 [2024-11-20 11:46:55.093792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.450 [2024-11-20 11:46:55.093803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:49.450 [2024-11-20 11:46:55.093815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:32:49.450 [2024-11-20 11:46:55.093827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.450 [2024-11-20 11:46:55.093920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.450 [2024-11-20 11:46:55.093935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:49.450 [2024-11-20 11:46:55.093953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:49.450 [2024-11-20 11:46:55.093964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.450 [2024-11-20 11:46:55.094076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:49.450 [2024-11-20 11:46:55.094094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:49.450 [2024-11-20 11:46:55.094106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:49.450 [2024-11-20 11:46:55.094139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:49.450 [2024-11-20 11:46:55.094168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:49.450 [2024-11-20 11:46:55.094187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:49.450 [2024-11-20 11:46:55.094197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:49.450 [2024-11-20 11:46:55.094206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:49.450 [2024-11-20 11:46:55.094231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:49.450 [2024-11-20 11:46:55.094242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:49.450 [2024-11-20 11:46:55.094251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:32:49.450 [2024-11-20 11:46:55.094270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:49.450 [2024-11-20 11:46:55.094298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:49.450 [2024-11-20 11:46:55.094344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:49.450 [2024-11-20 11:46:55.094374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:49.450 [2024-11-20 11:46:55.094404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:49.450 [2024-11-20 11:46:55.094434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:49.450 [2024-11-20 11:46:55.094453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:49.450 [2024-11-20 11:46:55.094463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:49.450 [2024-11-20 11:46:55.094473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:49.450 [2024-11-20 11:46:55.094484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:49.450 [2024-11-20 11:46:55.094494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:49.450 [2024-11-20 11:46:55.094504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:49.450 [2024-11-20 11:46:55.094523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:49.450 [2024-11-20 11:46:55.094533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094542] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:49.450 [2024-11-20 11:46:55.094553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:49.450 [2024-11-20 11:46:55.094564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.450 [2024-11-20 11:46:55.094612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:49.450 [2024-11-20 11:46:55.094626] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:49.450 [2024-11-20 11:46:55.094636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:49.450 [2024-11-20 11:46:55.094646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:49.450 [2024-11-20 11:46:55.094656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:49.450 [2024-11-20 11:46:55.094666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:49.450 [2024-11-20 11:46:55.094678] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:49.450 [2024-11-20 11:46:55.094707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:49.450 [2024-11-20 11:46:55.094720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:49.450 [2024-11-20 11:46:55.094731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:49.450 [2024-11-20 11:46:55.094743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:49.450 [2024-11-20 11:46:55.094754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:49.450 [2024-11-20 11:46:55.094764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:49.450 [2024-11-20 11:46:55.094775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:49.450 [2024-11-20 11:46:55.094786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:49.450 [2024-11-20 11:46:55.094796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:49.450 [2024-11-20 11:46:55.094807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:49.450 [2024-11-20 11:46:55.094818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:49.450 [2024-11-20 11:46:55.094828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:49.450 [2024-11-20 11:46:55.094839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:49.450 [2024-11-20 11:46:55.094850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:49.450 [2024-11-20 11:46:55.094861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:49.450 [2024-11-20 11:46:55.094872] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:49.450 [2024-11-20 11:46:55.094884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:49.451 [2024-11-20 11:46:55.094897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:49.451 [2024-11-20 11:46:55.094907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:49.451 [2024-11-20 11:46:55.094918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:49.451 [2024-11-20 11:46:55.094928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:49.451 [2024-11-20 11:46:55.094939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.094950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:49.451 [2024-11-20 11:46:55.094967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:32:49.451 [2024-11-20 11:46:55.094978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.451 [2024-11-20 11:46:55.140791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.140875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:49.451 [2024-11-20 11:46:55.140896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.734 ms 00:32:49.451 [2024-11-20 11:46:55.140909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.451 [2024-11-20 11:46:55.141162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.141192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:49.451 [2024-11-20 11:46:55.141207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:32:49.451 [2024-11-20 11:46:55.141219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.451 [2024-11-20 11:46:55.207025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.207107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:49.451 [2024-11-20 11:46:55.207177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.741 ms 00:32:49.451 [2024-11-20 11:46:55.207202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.451 [2024-11-20 11:46:55.207407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.207431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:49.451 [2024-11-20 11:46:55.207448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:49.451 [2024-11-20 11:46:55.207461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.451 [2024-11-20 11:46:55.208330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.208379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:49.451 [2024-11-20 11:46:55.208395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:32:49.451 [2024-11-20 11:46:55.208417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.451 [2024-11-20 11:46:55.208690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:49.451 [2024-11-20 11:46:55.208729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:49.451 [2024-11-20 11:46:55.208744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:32:49.451 [2024-11-20 11:46:55.208756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.710 [2024-11-20 11:46:55.232922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.710 [2024-11-20 11:46:55.232977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:49.710 [2024-11-20 11:46:55.233000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.134 ms 00:32:49.710 [2024-11-20 11:46:55.233013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.710 [2024-11-20 11:46:55.250234] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:49.710 [2024-11-20 11:46:55.250289] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:49.710 [2024-11-20 11:46:55.250307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.710 [2024-11-20 11:46:55.250321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:49.710 [2024-11-20 11:46:55.250334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.127 ms 00:32:49.710 [2024-11-20 11:46:55.250346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.710 [2024-11-20 11:46:55.278618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.710 [2024-11-20 11:46:55.278682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:49.710 [2024-11-20 11:46:55.278705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.142 ms 00:32:49.710 [2024-11-20 11:46:55.278717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.710 [2024-11-20 11:46:55.293492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.710 [2024-11-20 11:46:55.293543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:49.710 [2024-11-20 11:46:55.293561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.683 ms 00:32:49.711 [2024-11-20 11:46:55.293589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.309178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.309246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:49.711 [2024-11-20 11:46:55.309265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.474 ms 00:32:49.711 [2024-11-20 11:46:55.309277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.310216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.310267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:49.711 [2024-11-20 11:46:55.310283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:32:49.711 [2024-11-20 11:46:55.310296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.391559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 
11:46:55.391672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:49.711 [2024-11-20 11:46:55.391708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.217 ms 00:32:49.711 [2024-11-20 11:46:55.391721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.403754] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:49.711 [2024-11-20 11:46:55.427852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.427924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:49.711 [2024-11-20 11:46:55.427945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.983 ms 00:32:49.711 [2024-11-20 11:46:55.427959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.428111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.428132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:49.711 [2024-11-20 11:46:55.428146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:49.711 [2024-11-20 11:46:55.428163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.428258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.428290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:49.711 [2024-11-20 11:46:55.428320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:49.711 [2024-11-20 11:46:55.428332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.428376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.428397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:49.711 [2024-11-20 11:46:55.428409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:49.711 [2024-11-20 11:46:55.428420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.428471] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:49.711 [2024-11-20 11:46:55.428490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.428502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:49.711 [2024-11-20 11:46:55.428514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:49.711 [2024-11-20 11:46:55.428526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.458680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.458748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:49.711 [2024-11-20 11:46:55.458766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.106 ms 00:32:49.711 [2024-11-20 11:46:55.458778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.458913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.711 [2024-11-20 11:46:55.458934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:49.711 [2024-11-20 
11:46:55.458946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:49.711 [2024-11-20 11:46:55.458958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.711 [2024-11-20 11:46:55.460378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:49.711 [2024-11-20 11:46:55.464175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.860 ms, result 0 00:32:49.711 [2024-11-20 11:46:55.465019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:49.969 [2024-11-20 11:46:55.480921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:50.903  [2024-11-20T11:46:57.603Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T11:46:58.538Z] Copying: 52/256 [MB] (24 MBps) [2024-11-20T11:46:59.913Z] Copying: 76/256 [MB] (24 MBps) [2024-11-20T11:47:00.847Z] Copying: 100/256 [MB] (24 MBps) [2024-11-20T11:47:01.780Z] Copying: 124/256 [MB] (24 MBps) [2024-11-20T11:47:02.713Z] Copying: 149/256 [MB] (24 MBps) [2024-11-20T11:47:03.648Z] Copying: 171/256 [MB] (22 MBps) [2024-11-20T11:47:04.585Z] Copying: 192/256 [MB] (21 MBps) [2024-11-20T11:47:05.960Z] Copying: 215/256 [MB] (22 MBps) [2024-11-20T11:47:06.526Z] Copying: 238/256 [MB] (22 MBps) [2024-11-20T11:47:06.784Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-20 11:47:06.714070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:01.018 [2024-11-20 11:47:06.732916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.019 [2024-11-20 11:47:06.732969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:01.019 [2024-11-20 11:47:06.732990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:01.019 [2024-11-20 11:47:06.733014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.019 [2024-11-20 11:47:06.733050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:33:01.019 [2024-11-20 11:47:06.736772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.019 [2024-11-20 11:47:06.736808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:01.019 [2024-11-20 11:47:06.736825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:33:01.019 [2024-11-20 11:47:06.736837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.019 [2024-11-20 11:47:06.737151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.019 [2024-11-20 11:47:06.737176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:01.019 [2024-11-20 11:47:06.737191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:33:01.019 [2024-11-20 11:47:06.737202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.019 [2024-11-20 11:47:06.740862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.019 [2024-11-20 11:47:06.740901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:01.019 [2024-11-20 11:47:06.740916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.635 ms 00:33:01.019 [2024-11-20 11:47:06.740928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:01.019 [2024-11-20 11:47:06.748681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.019 [2024-11-20 11:47:06.748718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:01.019 [2024-11-20 11:47:06.748732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.727 ms 00:33:01.019 [2024-11-20 11:47:06.748743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.019 [2024-11-20 11:47:06.780109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.019 [2024-11-20 11:47:06.780157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:01.019 [2024-11-20 11:47:06.780173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.293 ms 00:33:01.019 [2024-11-20 11:47:06.780184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.278 [2024-11-20 11:47:06.796409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.278 [2024-11-20 11:47:06.796459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:01.278 [2024-11-20 11:47:06.796476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.161 ms 00:33:01.278 [2024-11-20 11:47:06.796492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.278 [2024-11-20 11:47:06.796653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.278 [2024-11-20 11:47:06.796673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:01.278 [2024-11-20 11:47:06.796686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:33:01.278 [2024-11-20 11:47:06.796696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.278 [2024-11-20 11:47:06.824538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.279 [2024-11-20 11:47:06.824775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:01.279 [2024-11-20 11:47:06.824814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.806 ms 00:33:01.279 [2024-11-20 11:47:06.824828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.279 [2024-11-20 11:47:06.852282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.279 [2024-11-20 11:47:06.852324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:01.279 [2024-11-20 11:47:06.852355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.384 ms 00:33:01.279 [2024-11-20 11:47:06.852365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.279 [2024-11-20 11:47:06.879536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.279 [2024-11-20 11:47:06.879586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:01.279 [2024-11-20 11:47:06.879602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.109 ms 00:33:01.279 [2024-11-20 11:47:06.879612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.279 [2024-11-20 11:47:06.908799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.279 [2024-11-20 11:47:06.908840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:01.279 [2024-11-20 11:47:06.908856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.079 ms 00:33:01.279 
[2024-11-20 11:47:06.908866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.279 [2024-11-20 11:47:06.908927] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:01.279 [2024-11-20 11:47:06.908950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.908963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.908974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.908984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.908994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909186] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 
11:47:06.909510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:01.279 [2024-11-20 11:47:06.909817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:33:01.280 [2024-11-20 11:47:06.909905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.909999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:01.280 [2024-11-20 11:47:06.910285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:01.280 [2024-11-20 11:47:06.910296] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e3116d21-5f36-46d4-8ab1-bab032ddcd4c 00:33:01.280 [2024-11-20 11:47:06.910332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:01.280 [2024-11-20 11:47:06.910343] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:01.280 [2024-11-20 11:47:06.910354] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:01.280 [2024-11-20 11:47:06.910365] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:01.280 [2024-11-20 11:47:06.910376] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:01.280 [2024-11-20 11:47:06.910388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:01.280 [2024-11-20 11:47:06.910399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:01.280 [2024-11-20 11:47:06.910409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:01.280 [2024-11-20 11:47:06.910419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:01.280 [2024-11-20 11:47:06.910430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.280 [2024-11-20 11:47:06.910448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:01.280 [2024-11-20 11:47:06.910460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms 00:33:01.280 [2024-11-20 11:47:06.910472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.280 [2024-11-20 11:47:06.927059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.280 [2024-11-20 11:47:06.927096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:01.280 [2024-11-20 11:47:06.927112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.561 ms 00:33:01.280 [2024-11-20 11:47:06.927123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.280 [2024-11-20 11:47:06.927660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.280 [2024-11-20 11:47:06.927692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:01.280 [2024-11-20 11:47:06.927708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:33:01.280 [2024-11-20 11:47:06.927719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.280 [2024-11-20 11:47:06.971103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.280 [2024-11-20 11:47:06.971149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:01.280 [2024-11-20 11:47:06.971165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.280 [2024-11-20 11:47:06.971175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.280 [2024-11-20 11:47:06.971270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.280 [2024-11-20 11:47:06.971287] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:01.280 [2024-11-20 11:47:06.971298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.280 [2024-11-20 11:47:06.971308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.280 [2024-11-20 11:47:06.971381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.280 [2024-11-20 11:47:06.971398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:01.280 [2024-11-20 11:47:06.971410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.280 [2024-11-20 11:47:06.971420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.280 [2024-11-20 11:47:06.971444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.280 [2024-11-20 11:47:06.971464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:01.280 [2024-11-20 11:47:06.971476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.280 [2024-11-20 11:47:06.971486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.539 [2024-11-20 11:47:07.064177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.539 [2024-11-20 11:47:07.064254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:01.539 [2024-11-20 11:47:07.064272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.539 [2024-11-20 11:47:07.064283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.539 [2024-11-20 11:47:07.140837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.539 [2024-11-20 11:47:07.140898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:01.539 [2024-11-20 11:47:07.140915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.539 [2024-11-20 11:47:07.140927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.539 [2024-11-20 11:47:07.140994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.539 [2024-11-20 11:47:07.141010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:01.539 [2024-11-20 11:47:07.141021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.539 [2024-11-20 11:47:07.141031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.539 [2024-11-20 11:47:07.141065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.539 [2024-11-20 11:47:07.141078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:01.539 [2024-11-20 11:47:07.141095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.539 [2024-11-20 11:47:07.141105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.539 [2024-11-20 11:47:07.141219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.539 [2024-11-20 11:47:07.141247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:01.539 [2024-11-20 11:47:07.141261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.539 [2024-11-20 11:47:07.141271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.539 [2024-11-20 11:47:07.141339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:01.539 [2024-11-20 11:47:07.141357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:01.540 [2024-11-20 11:47:07.141369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.540 [2024-11-20 11:47:07.141385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.540 [2024-11-20 11:47:07.141434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.540 [2024-11-20 11:47:07.141449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:01.540 [2024-11-20 11:47:07.141459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.540 [2024-11-20 11:47:07.141470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.540 [2024-11-20 11:47:07.141522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.540 [2024-11-20 11:47:07.141537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:01.540 [2024-11-20 11:47:07.141618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.540 [2024-11-20 11:47:07.141632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.540 [2024-11-20 11:47:07.141819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 408.911 ms, result 0 00:33:02.523 00:33:02.523 00:33:02.523 11:47:08 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:03.089 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:33:03.090 11:47:08 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78908 00:33:03.090 11:47:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78908 ']' 00:33:03.090 11:47:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78908 00:33:03.090 Process with pid 78908 is not found 00:33:03.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78908) - No such process 00:33:03.090 11:47:08 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78908 is not found' 00:33:03.090 ************************************ 00:33:03.090 END TEST ftl_trim 00:33:03.090 ************************************ 00:33:03.090 00:33:03.090 real 1m12.787s 00:33:03.090 user 1m40.091s 00:33:03.090 sys 0m7.929s 00:33:03.090 11:47:08 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.090 11:47:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:33:03.090 11:47:08 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:33:03.090 11:47:08 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:03.090 11:47:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.090 11:47:08 ftl -- common/autotest_common.sh@10 
-- # set +x 00:33:03.090 ************************************ 00:33:03.090 START TEST ftl_restore 00:33:03.090 ************************************ 00:33:03.090 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:33:03.090 * Looking for test storage... 00:33:03.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:03.090 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:03.090 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:33:03.090 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.349 11:47:08 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:03.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.349 --rc genhtml_branch_coverage=1 00:33:03.349 --rc genhtml_function_coverage=1 00:33:03.349 --rc genhtml_legend=1 00:33:03.349 --rc geninfo_all_blocks=1 00:33:03.349 --rc geninfo_unexecuted_blocks=1 00:33:03.349 00:33:03.349 ' 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:03.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.349 --rc genhtml_branch_coverage=1 00:33:03.349 --rc genhtml_function_coverage=1 00:33:03.349 --rc genhtml_legend=1 00:33:03.349 --rc geninfo_all_blocks=1 00:33:03.349 --rc geninfo_unexecuted_blocks=1 00:33:03.349 00:33:03.349 ' 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:03.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.349 --rc genhtml_branch_coverage=1 00:33:03.349 --rc genhtml_function_coverage=1 00:33:03.349 --rc genhtml_legend=1 00:33:03.349 --rc geninfo_all_blocks=1 00:33:03.349 --rc geninfo_unexecuted_blocks=1 00:33:03.349 00:33:03.349 ' 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:03.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.349 --rc genhtml_branch_coverage=1 00:33:03.349 --rc genhtml_function_coverage=1 00:33:03.349 --rc genhtml_legend=1 00:33:03.349 --rc geninfo_all_blocks=1 00:33:03.349 --rc geninfo_unexecuted_blocks=1 00:33:03.349 00:33:03.349 ' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
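The xtrace that opens this test is scripts/common.sh probing the installed lcov: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares the fields numerically left to right, and since 1.15 sorts before 2 the pre-2.0 option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported. A minimal sketch of that comparison, with the traced lt/cmp_versions/decimal helpers condensed into one illustrative function (version_lt is not the script's own name) and purely numeric fields assumed:

    # Succeed if version $1 is strictly older than version $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            ((a < b)) && return 0                   # first differing field decides
            ((a > b)) && return 1
        done
        return 1                                    # equal is not "less than"
    }

    version_lt 1.15 2 && echo 'old lcov: use the --rc lcov_*_coverage=1 spelling'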
00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vqxV58kgcz 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:03.349 
11:47:08 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79191 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:03.349 11:47:08 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79191 00:33:03.349 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79191 ']' 00:33:03.350 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.350 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.350 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.350 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.350 11:47:08 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:33:03.350 [2024-11-20 11:47:09.082399] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:33:03.350 [2024-11-20 11:47:09.082809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79191 ] 00:33:03.612 [2024-11-20 11:47:09.266019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.869 [2024-11-20 11:47:09.378790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.435 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.435 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:33:04.435 11:47:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:04.435 11:47:10 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:33:04.435 11:47:10 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:04.435 11:47:10 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:33:04.436 11:47:10 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:33:04.436 11:47:10 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:05.003 11:47:10 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:05.003 11:47:10 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:33:05.003 11:47:10 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:05.003 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:33:05.003 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:05.003 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:33:05.003 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:33:05.003 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:05.262 { 00:33:05.262 "name": "nvme0n1", 00:33:05.262 "aliases": [ 00:33:05.262 "de38a523-6b2a-4739-be01-c7da87d3ea23" 00:33:05.262 ], 00:33:05.262 "product_name": "NVMe disk", 00:33:05.262 "block_size": 4096, 00:33:05.262 "num_blocks": 1310720, 00:33:05.262 "uuid": 
"de38a523-6b2a-4739-be01-c7da87d3ea23", 00:33:05.262 "numa_id": -1, 00:33:05.262 "assigned_rate_limits": { 00:33:05.262 "rw_ios_per_sec": 0, 00:33:05.262 "rw_mbytes_per_sec": 0, 00:33:05.262 "r_mbytes_per_sec": 0, 00:33:05.262 "w_mbytes_per_sec": 0 00:33:05.262 }, 00:33:05.262 "claimed": true, 00:33:05.262 "claim_type": "read_many_write_one", 00:33:05.262 "zoned": false, 00:33:05.262 "supported_io_types": { 00:33:05.262 "read": true, 00:33:05.262 "write": true, 00:33:05.262 "unmap": true, 00:33:05.262 "flush": true, 00:33:05.262 "reset": true, 00:33:05.262 "nvme_admin": true, 00:33:05.262 "nvme_io": true, 00:33:05.262 "nvme_io_md": false, 00:33:05.262 "write_zeroes": true, 00:33:05.262 "zcopy": false, 00:33:05.262 "get_zone_info": false, 00:33:05.262 "zone_management": false, 00:33:05.262 "zone_append": false, 00:33:05.262 "compare": true, 00:33:05.262 "compare_and_write": false, 00:33:05.262 "abort": true, 00:33:05.262 "seek_hole": false, 00:33:05.262 "seek_data": false, 00:33:05.262 "copy": true, 00:33:05.262 "nvme_iov_md": false 00:33:05.262 }, 00:33:05.262 "driver_specific": { 00:33:05.262 "nvme": [ 00:33:05.262 { 00:33:05.262 "pci_address": "0000:00:11.0", 00:33:05.262 "trid": { 00:33:05.262 "trtype": "PCIe", 00:33:05.262 "traddr": "0000:00:11.0" 00:33:05.262 }, 00:33:05.262 "ctrlr_data": { 00:33:05.262 "cntlid": 0, 00:33:05.262 "vendor_id": "0x1b36", 00:33:05.262 "model_number": "QEMU NVMe Ctrl", 00:33:05.262 "serial_number": "12341", 00:33:05.262 "firmware_revision": "8.0.0", 00:33:05.262 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:05.262 "oacs": { 00:33:05.262 "security": 0, 00:33:05.262 "format": 1, 00:33:05.262 "firmware": 0, 00:33:05.262 "ns_manage": 1 00:33:05.262 }, 00:33:05.262 "multi_ctrlr": false, 00:33:05.262 "ana_reporting": false 00:33:05.262 }, 00:33:05.262 "vs": { 00:33:05.262 "nvme_version": "1.4" 00:33:05.262 }, 00:33:05.262 "ns_data": { 00:33:05.262 "id": 1, 00:33:05.262 "can_share": false 00:33:05.262 } 00:33:05.262 } 00:33:05.262 ], 00:33:05.262 "mp_policy": "active_passive" 00:33:05.262 } 00:33:05.262 } 00:33:05.262 ]' 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:05.262 11:47:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:33:05.262 11:47:10 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:33:05.262 11:47:10 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:05.262 11:47:10 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:33:05.262 11:47:10 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:05.262 11:47:10 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:05.520 11:47:11 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ab1c356b-66e4-421a-9a40-1c365cb70cdb 00:33:05.520 11:47:11 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:33:05.520 11:47:11 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab1c356b-66e4-421a-9a40-1c365cb70cdb 00:33:06.087 11:47:11 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:33:06.346 11:47:11 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=64138c4b-6e57-4458-8136-3fabf739017a 00:33:06.346 11:47:11 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64138c4b-6e57-4458-8136-3fabf739017a 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=88bb6a27-b995-4733-abc8-53533dfda38d 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=88bb6a27-b995-4733-abc8-53533dfda38d 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:33:06.603 11:47:12 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:06.603 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=88bb6a27-b995-4733-abc8-53533dfda38d 00:33:06.603 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:06.603 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:33:06.603 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:33:06.603 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:06.861 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:06.861 { 00:33:06.861 "name": "88bb6a27-b995-4733-abc8-53533dfda38d", 00:33:06.861 "aliases": [ 00:33:06.861 "lvs/nvme0n1p0" 00:33:06.861 ], 00:33:06.861 "product_name": "Logical Volume", 00:33:06.861 "block_size": 4096, 00:33:06.861 "num_blocks": 26476544, 00:33:06.861 "uuid": "88bb6a27-b995-4733-abc8-53533dfda38d", 00:33:06.861 "assigned_rate_limits": { 00:33:06.861 "rw_ios_per_sec": 0, 00:33:06.861 "rw_mbytes_per_sec": 0, 00:33:06.861 "r_mbytes_per_sec": 0, 00:33:06.861 "w_mbytes_per_sec": 0 00:33:06.861 }, 00:33:06.861 "claimed": false, 00:33:06.861 "zoned": false, 00:33:06.861 "supported_io_types": { 00:33:06.861 "read": true, 00:33:06.861 "write": true, 00:33:06.861 "unmap": true, 00:33:06.861 "flush": false, 00:33:06.861 "reset": true, 00:33:06.861 "nvme_admin": false, 00:33:06.861 "nvme_io": false, 00:33:06.861 "nvme_io_md": false, 00:33:06.861 "write_zeroes": true, 00:33:06.861 "zcopy": false, 00:33:06.861 "get_zone_info": false, 00:33:06.861 "zone_management": false, 00:33:06.861 "zone_append": false, 00:33:06.861 "compare": false, 00:33:06.861 "compare_and_write": false, 00:33:06.861 "abort": false, 00:33:06.861 "seek_hole": true, 00:33:06.861 "seek_data": true, 00:33:06.861 "copy": false, 00:33:06.861 "nvme_iov_md": false 00:33:06.861 }, 00:33:06.861 "driver_specific": { 00:33:06.861 "lvol": { 00:33:06.861 "lvol_store_uuid": "64138c4b-6e57-4458-8136-3fabf739017a", 00:33:06.861 "base_bdev": "nvme0n1", 00:33:06.861 "thin_provision": true, 00:33:06.861 "num_allocated_clusters": 0, 00:33:06.861 "snapshot": false, 00:33:06.861 "clone": false, 00:33:06.861 "esnap_clone": false 00:33:06.861 } 00:33:06.861 } 00:33:06.861 } 00:33:06.861 ]' 00:33:06.861 11:47:12 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:06.861 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:33:06.861 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:06.861 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:06.861 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:06.861 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:33:06.861 11:47:12 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:33:06.861 11:47:12 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:33:06.861 11:47:12 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:07.118 11:47:12 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:07.118 11:47:12 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:07.118 11:47:12 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:07.118 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=88bb6a27-b995-4733-abc8-53533dfda38d 00:33:07.118 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:07.118 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:33:07.118 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:33:07.118 11:47:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:07.376 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:07.376 { 00:33:07.376 "name": "88bb6a27-b995-4733-abc8-53533dfda38d", 00:33:07.376 "aliases": [ 00:33:07.376 "lvs/nvme0n1p0" 00:33:07.376 ], 00:33:07.376 "product_name": "Logical Volume", 00:33:07.376 "block_size": 4096, 00:33:07.376 "num_blocks": 26476544, 00:33:07.376 "uuid": "88bb6a27-b995-4733-abc8-53533dfda38d", 00:33:07.376 "assigned_rate_limits": { 00:33:07.376 "rw_ios_per_sec": 0, 00:33:07.376 "rw_mbytes_per_sec": 0, 00:33:07.376 "r_mbytes_per_sec": 0, 00:33:07.376 "w_mbytes_per_sec": 0 00:33:07.376 }, 00:33:07.376 "claimed": false, 00:33:07.376 "zoned": false, 00:33:07.376 "supported_io_types": { 00:33:07.376 "read": true, 00:33:07.376 "write": true, 00:33:07.376 "unmap": true, 00:33:07.376 "flush": false, 00:33:07.376 "reset": true, 00:33:07.376 "nvme_admin": false, 00:33:07.376 "nvme_io": false, 00:33:07.376 "nvme_io_md": false, 00:33:07.376 "write_zeroes": true, 00:33:07.376 "zcopy": false, 00:33:07.376 "get_zone_info": false, 00:33:07.376 "zone_management": false, 00:33:07.376 "zone_append": false, 00:33:07.376 "compare": false, 00:33:07.376 "compare_and_write": false, 00:33:07.376 "abort": false, 00:33:07.376 "seek_hole": true, 00:33:07.376 "seek_data": true, 00:33:07.376 "copy": false, 00:33:07.376 "nvme_iov_md": false 00:33:07.376 }, 00:33:07.376 "driver_specific": { 00:33:07.376 "lvol": { 00:33:07.376 "lvol_store_uuid": "64138c4b-6e57-4458-8136-3fabf739017a", 00:33:07.376 "base_bdev": "nvme0n1", 00:33:07.376 "thin_provision": true, 00:33:07.376 "num_allocated_clusters": 0, 00:33:07.376 "snapshot": false, 00:33:07.376 "clone": false, 00:33:07.376 "esnap_clone": false 00:33:07.376 } 00:33:07.376 } 00:33:07.376 } 00:33:07.376 ]' 00:33:07.376 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:33:07.633 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:33:07.633 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:07.633 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:07.633 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:07.633 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:33:07.633 11:47:13 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:33:07.633 11:47:13 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:07.891 11:47:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:33:07.891 11:47:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:07.891 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=88bb6a27-b995-4733-abc8-53533dfda38d 00:33:07.891 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:07.891 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:33:07.891 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:33:07.891 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88bb6a27-b995-4733-abc8-53533dfda38d 00:33:08.150 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:08.150 { 00:33:08.150 "name": "88bb6a27-b995-4733-abc8-53533dfda38d", 00:33:08.150 "aliases": [ 00:33:08.150 "lvs/nvme0n1p0" 00:33:08.150 ], 00:33:08.150 "product_name": "Logical Volume", 00:33:08.150 "block_size": 4096, 00:33:08.150 "num_blocks": 26476544, 00:33:08.150 "uuid": "88bb6a27-b995-4733-abc8-53533dfda38d", 00:33:08.150 "assigned_rate_limits": { 00:33:08.150 "rw_ios_per_sec": 0, 00:33:08.150 "rw_mbytes_per_sec": 0, 00:33:08.150 "r_mbytes_per_sec": 0, 00:33:08.150 "w_mbytes_per_sec": 0 00:33:08.150 }, 00:33:08.150 "claimed": false, 00:33:08.150 "zoned": false, 00:33:08.150 "supported_io_types": { 00:33:08.150 "read": true, 00:33:08.150 "write": true, 00:33:08.150 "unmap": true, 00:33:08.150 "flush": false, 00:33:08.150 "reset": true, 00:33:08.150 "nvme_admin": false, 00:33:08.150 "nvme_io": false, 00:33:08.150 "nvme_io_md": false, 00:33:08.150 "write_zeroes": true, 00:33:08.150 "zcopy": false, 00:33:08.150 "get_zone_info": false, 00:33:08.150 "zone_management": false, 00:33:08.150 "zone_append": false, 00:33:08.150 "compare": false, 00:33:08.150 "compare_and_write": false, 00:33:08.150 "abort": false, 00:33:08.150 "seek_hole": true, 00:33:08.150 "seek_data": true, 00:33:08.150 "copy": false, 00:33:08.150 "nvme_iov_md": false 00:33:08.150 }, 00:33:08.150 "driver_specific": { 00:33:08.150 "lvol": { 00:33:08.150 "lvol_store_uuid": "64138c4b-6e57-4458-8136-3fabf739017a", 00:33:08.150 "base_bdev": "nvme0n1", 00:33:08.150 "thin_provision": true, 00:33:08.150 "num_allocated_clusters": 0, 00:33:08.150 "snapshot": false, 00:33:08.150 "clone": false, 00:33:08.150 "esnap_clone": false 00:33:08.150 } 00:33:08.150 } 00:33:08.150 } 00:33:08.150 ]' 00:33:08.150 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:08.150 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:33:08.150 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:08.150 11:47:13 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:33:08.150 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:08.150 11:47:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 88bb6a27-b995-4733-abc8-53533dfda38d --l2p_dram_limit 10' 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:33:08.150 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:33:08.150 11:47:13 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88bb6a27-b995-4733-abc8-53533dfda38d --l2p_dram_limit 10 -c nvc0n1p0 00:33:08.410 [2024-11-20 11:47:14.063839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.063908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:08.410 [2024-11-20 11:47:14.063935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:08.410 [2024-11-20 11:47:14.063948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.064027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.064045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:08.410 [2024-11-20 11:47:14.064060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:08.410 [2024-11-20 11:47:14.064071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.064108] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:08.410 [2024-11-20 11:47:14.065266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:08.410 [2024-11-20 11:47:14.065309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.065324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:08.410 [2024-11-20 11:47:14.065340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:33:08.410 [2024-11-20 11:47:14.065351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.065517] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5519a080-ca77-47db-88d7-988f5a6d6cec 00:33:08.410 [2024-11-20 11:47:14.067482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.067542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:08.410 [2024-11-20 11:47:14.067570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:08.410 [2024-11-20 11:47:14.067589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.078152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 
11:47:14.078219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:08.410 [2024-11-20 11:47:14.078255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.445 ms 00:33:08.410 [2024-11-20 11:47:14.078268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.078403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.078428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:08.410 [2024-11-20 11:47:14.078442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:33:08.410 [2024-11-20 11:47:14.078460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.078533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.078556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:08.410 [2024-11-20 11:47:14.078569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:08.410 [2024-11-20 11:47:14.078635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.078672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:08.410 [2024-11-20 11:47:14.083677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.083715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:08.410 [2024-11-20 11:47:14.083752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:33:08.410 [2024-11-20 11:47:14.083763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.083806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.083820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:08.410 [2024-11-20 11:47:14.083835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:08.410 [2024-11-20 11:47:14.083845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.083891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:08.410 [2024-11-20 11:47:14.084037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:08.410 [2024-11-20 11:47:14.084061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:08.410 [2024-11-20 11:47:14.084075] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:08.410 [2024-11-20 11:47:14.084092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084104] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084117] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:08.410 [2024-11-20 11:47:14.084128] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:08.410 [2024-11-20 11:47:14.084143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:08.410 [2024-11-20 11:47:14.084153] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:08.410 [2024-11-20 11:47:14.084166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.084177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:08.410 [2024-11-20 11:47:14.084190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:33:08.410 [2024-11-20 11:47:14.084212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.084300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.410 [2024-11-20 11:47:14.084331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:08.410 [2024-11-20 11:47:14.084345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:08.410 [2024-11-20 11:47:14.084356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.410 [2024-11-20 11:47:14.084467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:08.410 [2024-11-20 11:47:14.084485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:08.410 [2024-11-20 11:47:14.084499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:08.410 [2024-11-20 11:47:14.084534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:08.410 [2024-11-20 11:47:14.084616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.410 [2024-11-20 11:47:14.084671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:08.410 [2024-11-20 11:47:14.084683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:08.410 [2024-11-20 11:47:14.084695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.410 [2024-11-20 11:47:14.084706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:08.410 [2024-11-20 11:47:14.084719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:08.410 [2024-11-20 11:47:14.084729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:08.410 [2024-11-20 11:47:14.084759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:08.410 [2024-11-20 11:47:14.084797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:08.410 
[2024-11-20 11:47:14.084830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:08.410 [2024-11-20 11:47:14.084863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:08.410 [2024-11-20 11:47:14.084895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.410 [2024-11-20 11:47:14.084917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:08.410 [2024-11-20 11:47:14.084931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:08.410 [2024-11-20 11:47:14.084958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.410 [2024-11-20 11:47:14.084971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:08.410 [2024-11-20 11:47:14.084981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:08.410 [2024-11-20 11:47:14.084993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.410 [2024-11-20 11:47:14.085003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:08.410 [2024-11-20 11:47:14.085026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:08.411 [2024-11-20 11:47:14.085039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.411 [2024-11-20 11:47:14.085055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:08.411 [2024-11-20 11:47:14.085067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:08.411 [2024-11-20 11:47:14.085082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.411 [2024-11-20 11:47:14.085094] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:08.411 [2024-11-20 11:47:14.085112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:08.411 [2024-11-20 11:47:14.085126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.411 [2024-11-20 11:47:14.085143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.411 [2024-11-20 11:47:14.085154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:08.411 [2024-11-20 11:47:14.085170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:08.411 [2024-11-20 11:47:14.085181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:08.411 [2024-11-20 11:47:14.085196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:08.411 [2024-11-20 11:47:14.085207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:08.411 [2024-11-20 11:47:14.085220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:08.411 [2024-11-20 11:47:14.085235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:08.411 [2024-11-20 
11:47:14.085290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:08.411 [2024-11-20 11:47:14.085322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:08.411 [2024-11-20 11:47:14.085334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:08.411 [2024-11-20 11:47:14.085348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:08.411 [2024-11-20 11:47:14.085389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:08.411 [2024-11-20 11:47:14.085403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:08.411 [2024-11-20 11:47:14.085415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:08.411 [2024-11-20 11:47:14.085429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:08.411 [2024-11-20 11:47:14.085440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:08.411 [2024-11-20 11:47:14.085456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:08.411 [2024-11-20 11:47:14.085523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:08.411 [2024-11-20 11:47:14.085538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:08.411 [2024-11-20 11:47:14.085597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:08.411 [2024-11-20 11:47:14.085641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:08.411 [2024-11-20 11:47:14.085658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:08.411 [2024-11-20 11:47:14.085672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.411 [2024-11-20 11:47:14.085687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:08.411 [2024-11-20 11:47:14.085706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:33:08.411 [2024-11-20 11:47:14.085721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.411 [2024-11-20 11:47:14.085794] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:33:08.411 [2024-11-20 11:47:14.085848] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:11.690 [2024-11-20 11:47:16.787871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.690 [2024-11-20 11:47:16.787976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:11.690 [2024-11-20 11:47:16.787999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2702.085 ms 00:33:11.690 [2024-11-20 11:47:16.788014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.690 [2024-11-20 11:47:16.825281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.690 [2024-11-20 11:47:16.825363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.691 [2024-11-20 11:47:16.825385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.016 ms 00:33:11.691 [2024-11-20 11:47:16.825400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.825672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.825715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:11.691 [2024-11-20 11:47:16.825747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:33:11.691 [2024-11-20 11:47:16.825764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.866961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.867029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.691 [2024-11-20 11:47:16.867047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.136 ms 00:33:11.691 [2024-11-20 11:47:16.867061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.867104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.867134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.691 [2024-11-20 11:47:16.867147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.691 [2024-11-20 11:47:16.867160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.867873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.867908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.691 [2024-11-20 11:47:16.867938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:33:11.691 [2024-11-20 11:47:16.867952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 
[2024-11-20 11:47:16.868115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.868134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.691 [2024-11-20 11:47:16.868166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:33:11.691 [2024-11-20 11:47:16.868181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.890283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.890508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.691 [2024-11-20 11:47:16.890594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.076 ms 00:33:11.691 [2024-11-20 11:47:16.890615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.904647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:11.691 [2024-11-20 11:47:16.908900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.908932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:11.691 [2024-11-20 11:47:16.908966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.176 ms 00:33:11.691 [2024-11-20 11:47:16.908978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.986614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.986688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:11.691 [2024-11-20 11:47:16.986713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.598 ms 00:33:11.691 [2024-11-20 11:47:16.986725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:16.986941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:16.986963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:11.691 [2024-11-20 11:47:16.986982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:33:11.691 [2024-11-20 11:47:16.986993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.014022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.014259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:11.691 [2024-11-20 11:47:17.014296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.963 ms 00:33:11.691 [2024-11-20 11:47:17.014311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.040535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.040581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:11.691 [2024-11-20 11:47:17.040619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.165 ms 00:33:11.691 [2024-11-20 11:47:17.040630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.041448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.041477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:11.691 
[2024-11-20 11:47:17.041494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:33:11.691 [2024-11-20 11:47:17.041506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.122415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.122684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:11.691 [2024-11-20 11:47:17.122725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.824 ms 00:33:11.691 [2024-11-20 11:47:17.122740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.152032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.152076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:11.691 [2024-11-20 11:47:17.152113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.188 ms 00:33:11.691 [2024-11-20 11:47:17.152125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.179194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.179236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:11.691 [2024-11-20 11:47:17.179270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.021 ms 00:33:11.691 [2024-11-20 11:47:17.179281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.206364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.206406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:11.691 [2024-11-20 11:47:17.206442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.020 ms 00:33:11.691 [2024-11-20 11:47:17.206453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.206508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.206526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:11.691 [2024-11-20 11:47:17.206581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:11.691 [2024-11-20 11:47:17.206594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.206744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.691 [2024-11-20 11:47:17.206763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:11.691 [2024-11-20 11:47:17.206781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:11.691 [2024-11-20 11:47:17.206793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.691 [2024-11-20 11:47:17.208216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3143.820 ms, result 0 00:33:11.691 { 00:33:11.691 "name": "ftl0", 00:33:11.691 "uuid": "5519a080-ca77-47db-88d7-988f5a6d6cec" 00:33:11.691 } 00:33:11.691 11:47:17 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:33:11.691 11:47:17 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:11.948 11:47:17 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:33:11.948 11:47:17 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:12.205 [2024-11-20 11:47:17.739341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.739419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:12.205 [2024-11-20 11:47:17.739442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:12.205 [2024-11-20 11:47:17.739468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.739513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:12.205 [2024-11-20 11:47:17.743172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.743209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:12.205 [2024-11-20 11:47:17.743229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:33:12.205 [2024-11-20 11:47:17.743241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.743574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.743596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:12.205 [2024-11-20 11:47:17.743615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:33:12.205 [2024-11-20 11:47:17.743627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.746556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.746588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:12.205 [2024-11-20 11:47:17.746620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:33:12.205 [2024-11-20 11:47:17.746631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.752559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.752595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:12.205 [2024-11-20 11:47:17.752617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.886 ms 00:33:12.205 [2024-11-20 11:47:17.752628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.781045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.781086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:12.205 [2024-11-20 11:47:17.781122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.328 ms 00:33:12.205 [2024-11-20 11:47:17.781133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.798740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.798780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:12.205 [2024-11-20 11:47:17.798815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:33:12.205 [2024-11-20 11:47:17.798827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.798996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.799017] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:12.205 [2024-11-20 11:47:17.799032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:33:12.205 [2024-11-20 11:47:17.799044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.827780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.827987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:12.205 [2024-11-20 11:47:17.828121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.708 ms 00:33:12.205 [2024-11-20 11:47:17.828173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.859237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.859438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:12.205 [2024-11-20 11:47:17.859650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.774 ms 00:33:12.205 [2024-11-20 11:47:17.859676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.887665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.887710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:12.205 [2024-11-20 11:47:17.887733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.898 ms 00:33:12.205 [2024-11-20 11:47:17.887745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.918768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.205 [2024-11-20 11:47:17.918811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:12.205 [2024-11-20 11:47:17.918833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.872 ms 00:33:12.205 [2024-11-20 11:47:17.918845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.205 [2024-11-20 11:47:17.918920] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:12.205 [2024-11-20 11:47:17.918973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.918990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919093] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 
[2024-11-20 11:47:17.919412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:12.205 [2024-11-20 11:47:17.919572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:33:12.206 [2024-11-20 11:47:17.919816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.919999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:12.206 [2024-11-20 11:47:17.920407] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:12.206 [2024-11-20 11:47:17.920426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5519a080-ca77-47db-88d7-988f5a6d6cec 00:33:12.206 [2024-11-20 11:47:17.920438] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:12.206 [2024-11-20 11:47:17.920461] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:12.206 [2024-11-20 11:47:17.920473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:12.206 [2024-11-20 11:47:17.920491] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:12.206 [2024-11-20 11:47:17.920503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:12.206 [2024-11-20 11:47:17.920517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:12.206 [2024-11-20 11:47:17.920529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:12.206 [2024-11-20 11:47:17.920557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:12.206 [2024-11-20 11:47:17.920569] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:33:12.206 [2024-11-20 11:47:17.920583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.206 [2024-11-20 11:47:17.920596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:12.206 [2024-11-20 11:47:17.920618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:33:12.206 [2024-11-20 11:47:17.920630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.206 [2024-11-20 11:47:17.937532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.206 [2024-11-20 11:47:17.937629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:12.206 [2024-11-20 11:47:17.937651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.827 ms 00:33:12.206 [2024-11-20 11:47:17.937662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.206 [2024-11-20 11:47:17.938151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.206 [2024-11-20 11:47:17.938182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:12.206 [2024-11-20 11:47:17.938203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:33:12.206 [2024-11-20 11:47:17.938218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:17.989025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:17.989070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:12.462 [2024-11-20 11:47:17.989105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.462 [2024-11-20 11:47:17.989117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:17.989186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:17.989201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:12.462 [2024-11-20 11:47:17.989214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.462 [2024-11-20 11:47:17.989227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:17.989390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:17.989410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:12.462 [2024-11-20 11:47:17.989425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.462 [2024-11-20 11:47:17.989436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:17.989467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:17.989481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:12.462 [2024-11-20 11:47:17.989494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.462 [2024-11-20 11:47:17.989504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:18.080299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:18.080363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:12.462 [2024-11-20 11:47:18.080400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:33:12.462 [2024-11-20 11:47:18.080412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:18.159717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:18.159775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:12.462 [2024-11-20 11:47:18.159812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.462 [2024-11-20 11:47:18.159827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.462 [2024-11-20 11:47:18.159939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.462 [2024-11-20 11:47:18.159957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:12.462 [2024-11-20 11:47:18.159972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.463 [2024-11-20 11:47:18.159982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.463 [2024-11-20 11:47:18.160073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.463 [2024-11-20 11:47:18.160091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:12.463 [2024-11-20 11:47:18.160106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.463 [2024-11-20 11:47:18.160116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.463 [2024-11-20 11:47:18.160240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.463 [2024-11-20 11:47:18.160258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:12.463 [2024-11-20 11:47:18.160272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.463 [2024-11-20 11:47:18.160283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.463 [2024-11-20 11:47:18.160358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.463 [2024-11-20 11:47:18.160376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:12.463 [2024-11-20 11:47:18.160390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.463 [2024-11-20 11:47:18.160401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.463 [2024-11-20 11:47:18.160451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.463 [2024-11-20 11:47:18.160468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:12.463 [2024-11-20 11:47:18.160482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.463 [2024-11-20 11:47:18.160493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.463 [2024-11-20 11:47:18.160554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.463 [2024-11-20 11:47:18.160614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:12.463 [2024-11-20 11:47:18.160633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.463 [2024-11-20 11:47:18.160644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.463 [2024-11-20 11:47:18.160845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 421.460 ms, result 0 00:33:12.463 true 00:33:12.463 11:47:18 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79191 
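The xtrace records that follow show the test tearing down the FTL app through the autotest killprocess helper: it guards against an empty pid, probes liveness with kill -0, resolves the command name with ps --no-headers -o comm= (reactor_0 here, so the sudo special case is skipped), then kills and reaps the pid. A minimal sketch of that pattern, reconstructed from the trace rather than from the exact autotest_common.sh source (argument handling and the sudo branch are simplified assumptions):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # trace: '[' -z 79191 ']'
    kill -0 "$pid" 2>/dev/null || return 0   # probe liveness without sending a signal
    if [ "$(uname)" = Linux ]; then          # trace: '[' Linux = Linux ']'
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            : # the real helper handles sudo-wrapped apps here; omitted (assumption)
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"                              # default SIGTERM
    wait "$pid" 2>/dev/null || true          # reap the child so the next stage can proceed
}

In the run above it fires as killprocess 79191 immediately after 'FTL shutdown' finishes with result 0, and the trace below shows each check passing in order before kill 79191 and wait 79191.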
00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79191 ']' 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79191 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79191 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79191' 00:33:12.463 killing process with pid 79191 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79191 00:33:12.463 11:47:18 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79191 00:33:17.719 11:47:22 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:33:21.922 262144+0 records in 00:33:21.922 262144+0 records out 00:33:21.922 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.4396 s, 242 MB/s 00:33:21.922 11:47:27 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:23.822 11:47:29 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:23.822 [2024-11-20 11:47:29.194187] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:33:23.822 [2024-11-20 11:47:29.194666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79432 ] 00:33:23.822 [2024-11-20 11:47:29.375395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.822 [2024-11-20 11:47:29.528860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.389 [2024-11-20 11:47:29.863802] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:24.389 [2024-11-20 11:47:29.863892] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:24.389 [2024-11-20 11:47:30.032706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.032757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:24.389 [2024-11-20 11:47:30.032803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:24.389 [2024-11-20 11:47:30.032814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.032880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.032898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:24.389 [2024-11-20 11:47:30.032918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:24.389 [2024-11-20 11:47:30.032928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.032957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:33:24.389 [2024-11-20 11:47:30.034028] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:24.389 [2024-11-20 11:47:30.034073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.034088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:24.389 [2024-11-20 11:47:30.034100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:33:24.389 [2024-11-20 11:47:30.034111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.036242] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:24.389 [2024-11-20 11:47:30.052021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.052065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:24.389 [2024-11-20 11:47:30.052097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.780 ms 00:33:24.389 [2024-11-20 11:47:30.052107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.052179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.052198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:24.389 [2024-11-20 11:47:30.052210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:24.389 [2024-11-20 11:47:30.052219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.060885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.060926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:24.389 [2024-11-20 11:47:30.060956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.581 ms 00:33:24.389 [2024-11-20 11:47:30.060967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.061060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.061078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:24.389 [2024-11-20 11:47:30.061089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:24.389 [2024-11-20 11:47:30.061100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.061150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.061166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:24.389 [2024-11-20 11:47:30.061178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:24.389 [2024-11-20 11:47:30.061188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.061219] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:24.389 [2024-11-20 11:47:30.066020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.066057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:24.389 [2024-11-20 11:47:30.066088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:33:24.389 [2024-11-20 11:47:30.066103] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.066139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.066154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:24.389 [2024-11-20 11:47:30.066165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:24.389 [2024-11-20 11:47:30.066174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.066234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:24.389 [2024-11-20 11:47:30.066264] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:24.389 [2024-11-20 11:47:30.066302] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:24.389 [2024-11-20 11:47:30.066355] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:24.389 [2024-11-20 11:47:30.066459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:24.389 [2024-11-20 11:47:30.066474] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:24.389 [2024-11-20 11:47:30.066489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:24.389 [2024-11-20 11:47:30.066503] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:24.389 [2024-11-20 11:47:30.066517] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:24.389 [2024-11-20 11:47:30.066528] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:24.389 [2024-11-20 11:47:30.066539] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:24.389 [2024-11-20 11:47:30.066549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:24.389 [2024-11-20 11:47:30.066559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:24.389 [2024-11-20 11:47:30.066600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.066633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:24.389 [2024-11-20 11:47:30.066646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:33:24.389 [2024-11-20 11:47:30.066656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.066795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.389 [2024-11-20 11:47:30.066811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:24.389 [2024-11-20 11:47:30.066822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:33:24.389 [2024-11-20 11:47:30.066833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.389 [2024-11-20 11:47:30.066944] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:24.389 [2024-11-20 11:47:30.066968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:24.389 [2024-11-20 11:47:30.066981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:33:24.389 [2024-11-20 11:47:30.067007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:24.389 [2024-11-20 11:47:30.067043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:24.389 [2024-11-20 11:47:30.067063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:24.389 [2024-11-20 11:47:30.067073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:24.389 [2024-11-20 11:47:30.067109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:24.389 [2024-11-20 11:47:30.067119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:24.389 [2024-11-20 11:47:30.067129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:24.389 [2024-11-20 11:47:30.067140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:24.389 [2024-11-20 11:47:30.067153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:24.389 [2024-11-20 11:47:30.067179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:24.389 [2024-11-20 11:47:30.067200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:24.389 [2024-11-20 11:47:30.067211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:24.389 [2024-11-20 11:47:30.067232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.389 [2024-11-20 11:47:30.067252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:24.389 [2024-11-20 11:47:30.067262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.389 [2024-11-20 11:47:30.067282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:24.389 [2024-11-20 11:47:30.067292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.389 [2024-11-20 11:47:30.067313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:24.389 [2024-11-20 11:47:30.067323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.389 [2024-11-20 11:47:30.067343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:24.389 [2024-11-20 11:47:30.067353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:24.389 [2024-11-20 11:47:30.067363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:24.390 [2024-11-20 11:47:30.067373] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:33:24.390 [2024-11-20 11:47:30.067383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:24.390 [2024-11-20 11:47:30.067393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:24.390 [2024-11-20 11:47:30.067403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:24.390 [2024-11-20 11:47:30.067414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:24.390 [2024-11-20 11:47:30.067424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.390 [2024-11-20 11:47:30.067433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:24.390 [2024-11-20 11:47:30.067443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:24.390 [2024-11-20 11:47:30.067453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.390 [2024-11-20 11:47:30.067463] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:24.390 [2024-11-20 11:47:30.067475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:24.390 [2024-11-20 11:47:30.067485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:24.390 [2024-11-20 11:47:30.067497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.390 [2024-11-20 11:47:30.067508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:24.390 [2024-11-20 11:47:30.067519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:24.390 [2024-11-20 11:47:30.067530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:24.390 [2024-11-20 11:47:30.067540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:24.390 [2024-11-20 11:47:30.067550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:24.390 [2024-11-20 11:47:30.067561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:24.390 [2024-11-20 11:47:30.067572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:24.390 [2024-11-20 11:47:30.067586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:24.390 [2024-11-20 11:47:30.067609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:24.390 [2024-11-20 11:47:30.067620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:24.390 [2024-11-20 11:47:30.067631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:24.390 [2024-11-20 11:47:30.067659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:24.390 [2024-11-20 11:47:30.067671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:24.390 [2024-11-20 11:47:30.067682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:24.390 [2024-11-20 11:47:30.067693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:24.390 [2024-11-20 11:47:30.067704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:24.390 [2024-11-20 11:47:30.067715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:24.390 [2024-11-20 11:47:30.067769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:24.390 [2024-11-20 11:47:30.067788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:24.390 [2024-11-20 11:47:30.067812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:24.390 [2024-11-20 11:47:30.067824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:24.390 [2024-11-20 11:47:30.067836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:24.390 [2024-11-20 11:47:30.067848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.390 [2024-11-20 11:47:30.067859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:24.390 [2024-11-20 11:47:30.067871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:33:24.390 [2024-11-20 11:47:30.067883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-11-20 11:47:30.106659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.390 [2024-11-20 11:47:30.106749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:24.390 [2024-11-20 11:47:30.106784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.712 ms 00:33:24.390 [2024-11-20 11:47:30.106795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.390 [2024-11-20 11:47:30.106912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.390 [2024-11-20 11:47:30.106927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:24.390 [2024-11-20 11:47:30.106940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:33:24.390 [2024-11-20 11:47:30.106950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.648 [2024-11-20 11:47:30.163719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.163775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:24.649 [2024-11-20 11:47:30.163809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.673 ms 00:33:24.649 [2024-11-20 11:47:30.163820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.163873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.163888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:24.649 [2024-11-20 11:47:30.163900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:24.649 [2024-11-20 11:47:30.163924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.164570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.164607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:24.649 [2024-11-20 11:47:30.164620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:33:24.649 [2024-11-20 11:47:30.164643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.164837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.164858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:24.649 [2024-11-20 11:47:30.164870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:33:24.649 [2024-11-20 11:47:30.164893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.183726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.183942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:24.649 [2024-11-20 11:47:30.183984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.789 ms 00:33:24.649 [2024-11-20 11:47:30.183998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.199572] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:24.649 [2024-11-20 11:47:30.199617] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:24.649 [2024-11-20 11:47:30.199650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.199662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:24.649 [2024-11-20 11:47:30.199674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.521 ms 00:33:24.649 [2024-11-20 11:47:30.199684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.226300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.226528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:24.649 [2024-11-20 11:47:30.226576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.570 ms 00:33:24.649 [2024-11-20 11:47:30.226598] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.240855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.241091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:24.649 [2024-11-20 11:47:30.241118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.225 ms 00:33:24.649 [2024-11-20 11:47:30.241129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.255059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.255101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:24.649 [2024-11-20 11:47:30.255132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.883 ms 00:33:24.649 [2024-11-20 11:47:30.255142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.255959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.255996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:24.649 [2024-11-20 11:47:30.256012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:33:24.649 [2024-11-20 11:47:30.256023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.328485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.328593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:24.649 [2024-11-20 11:47:30.328631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.431 ms 00:33:24.649 [2024-11-20 11:47:30.328666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.340152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:24.649 [2024-11-20 11:47:30.342940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.342974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:24.649 [2024-11-20 11:47:30.343005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.175 ms 00:33:24.649 [2024-11-20 11:47:30.343017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.343111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.343131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:24.649 [2024-11-20 11:47:30.343143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:24.649 [2024-11-20 11:47:30.343154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.343252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.343270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:24.649 [2024-11-20 11:47:30.343281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:24.649 [2024-11-20 11:47:30.343292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.343322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.343336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:33:24.649 [2024-11-20 11:47:30.343347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:24.649 [2024-11-20 11:47:30.343357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.343398] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:24.649 [2024-11-20 11:47:30.343415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.343430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:24.649 [2024-11-20 11:47:30.343441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:24.649 [2024-11-20 11:47:30.343451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.372992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.373037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:24.649 [2024-11-20 11:47:30.373070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.518 ms 00:33:24.649 [2024-11-20 11:47:30.373081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.373182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.649 [2024-11-20 11:47:30.373201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:24.649 [2024-11-20 11:47:30.373213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:24.649 [2024-11-20 11:47:30.373223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.649 [2024-11-20 11:47:30.374845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.481 ms, result 0 00:33:26.022  [2024-11-20T11:47:32.720Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-20T11:47:33.656Z] Copying: 46/1024 [MB] (23 MBps) [2024-11-20T11:47:34.591Z] Copying: 69/1024 [MB] (23 MBps) [2024-11-20T11:47:35.525Z] Copying: 92/1024 [MB] (23 MBps) [2024-11-20T11:47:36.460Z] Copying: 116/1024 [MB] (23 MBps) [2024-11-20T11:47:37.398Z] Copying: 139/1024 [MB] (23 MBps) [2024-11-20T11:47:38.774Z] Copying: 162/1024 [MB] (22 MBps) [2024-11-20T11:47:39.724Z] Copying: 185/1024 [MB] (22 MBps) [2024-11-20T11:47:40.659Z] Copying: 209/1024 [MB] (24 MBps) [2024-11-20T11:47:41.595Z] Copying: 234/1024 [MB] (24 MBps) [2024-11-20T11:47:42.545Z] Copying: 257/1024 [MB] (23 MBps) [2024-11-20T11:47:43.481Z] Copying: 280/1024 [MB] (23 MBps) [2024-11-20T11:47:44.416Z] Copying: 304/1024 [MB] (23 MBps) [2024-11-20T11:47:45.791Z] Copying: 328/1024 [MB] (23 MBps) [2024-11-20T11:47:46.725Z] Copying: 352/1024 [MB] (23 MBps) [2024-11-20T11:47:47.661Z] Copying: 376/1024 [MB] (24 MBps) [2024-11-20T11:47:48.596Z] Copying: 399/1024 [MB] (23 MBps) [2024-11-20T11:47:49.530Z] Copying: 423/1024 [MB] (23 MBps) [2024-11-20T11:47:50.464Z] Copying: 446/1024 [MB] (23 MBps) [2024-11-20T11:47:51.398Z] Copying: 469/1024 [MB] (23 MBps) [2024-11-20T11:47:52.773Z] Copying: 494/1024 [MB] (24 MBps) [2024-11-20T11:47:53.707Z] Copying: 519/1024 [MB] (25 MBps) [2024-11-20T11:47:54.643Z] Copying: 544/1024 [MB] (24 MBps) [2024-11-20T11:47:55.578Z] Copying: 569/1024 [MB] (24 MBps) [2024-11-20T11:47:56.512Z] Copying: 592/1024 [MB] (23 MBps) [2024-11-20T11:47:57.447Z] Copying: 616/1024 [MB] (23 MBps) [2024-11-20T11:47:58.441Z] Copying: 639/1024 [MB] (23 
MBps) [2024-11-20T11:47:59.393Z] Copying: 663/1024 [MB] (23 MBps) [2024-11-20T11:48:00.768Z] Copying: 687/1024 [MB] (23 MBps) [2024-11-20T11:48:01.704Z] Copying: 710/1024 [MB] (23 MBps) [2024-11-20T11:48:02.641Z] Copying: 734/1024 [MB] (23 MBps) [2024-11-20T11:48:03.575Z] Copying: 757/1024 [MB] (22 MBps) [2024-11-20T11:48:04.510Z] Copying: 781/1024 [MB] (24 MBps) [2024-11-20T11:48:05.471Z] Copying: 805/1024 [MB] (24 MBps) [2024-11-20T11:48:06.406Z] Copying: 829/1024 [MB] (23 MBps) [2024-11-20T11:48:07.782Z] Copying: 853/1024 [MB] (23 MBps) [2024-11-20T11:48:08.718Z] Copying: 876/1024 [MB] (23 MBps) [2024-11-20T11:48:09.655Z] Copying: 900/1024 [MB] (23 MBps) [2024-11-20T11:48:10.589Z] Copying: 924/1024 [MB] (23 MBps) [2024-11-20T11:48:11.524Z] Copying: 947/1024 [MB] (22 MBps) [2024-11-20T11:48:12.458Z] Copying: 971/1024 [MB] (23 MBps) [2024-11-20T11:48:13.393Z] Copying: 994/1024 [MB] (23 MBps) [2024-11-20T11:48:13.666Z] Copying: 1018/1024 [MB] (23 MBps) [2024-11-20T11:48:13.667Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 11:48:13.614345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.901 [2024-11-20 11:48:13.614407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:07.901 [2024-11-20 11:48:13.614459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:07.901 [2024-11-20 11:48:13.614472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.901 [2024-11-20 11:48:13.614502] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:07.901 [2024-11-20 11:48:13.618124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.901 [2024-11-20 11:48:13.618158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:07.901 [2024-11-20 11:48:13.618188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.599 ms 00:34:07.901 [2024-11-20 11:48:13.618198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.901 [2024-11-20 11:48:13.619989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.901 [2024-11-20 11:48:13.620048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:07.901 [2024-11-20 11:48:13.620065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:34:07.901 [2024-11-20 11:48:13.620076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.901 [2024-11-20 11:48:13.636578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.901 [2024-11-20 11:48:13.636620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:07.901 [2024-11-20 11:48:13.636652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.466 ms 00:34:07.901 [2024-11-20 11:48:13.636663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.901 [2024-11-20 11:48:13.642368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.901 [2024-11-20 11:48:13.642415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:07.901 [2024-11-20 11:48:13.642444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.667 ms 00:34:07.901 [2024-11-20 11:48:13.642455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.670203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 
11:48:13.670246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:08.169 [2024-11-20 11:48:13.670277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.684 ms 00:34:08.169 [2024-11-20 11:48:13.670287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.686422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 11:48:13.686467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:08.169 [2024-11-20 11:48:13.686499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.063 ms 00:34:08.169 [2024-11-20 11:48:13.686509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.686868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 11:48:13.686917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:08.169 [2024-11-20 11:48:13.686958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:34:08.169 [2024-11-20 11:48:13.686979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.714072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 11:48:13.714288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:08.169 [2024-11-20 11:48:13.714331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.071 ms 00:34:08.169 [2024-11-20 11:48:13.714343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.741097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 11:48:13.741306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:08.169 [2024-11-20 11:48:13.741373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.709 ms 00:34:08.169 [2024-11-20 11:48:13.741385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.767586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 11:48:13.767625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:08.169 [2024-11-20 11:48:13.767657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.157 ms 00:34:08.169 [2024-11-20 11:48:13.767666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.794197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.169 [2024-11-20 11:48:13.794239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:08.169 [2024-11-20 11:48:13.794271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.444 ms 00:34:08.169 [2024-11-20 11:48:13.794280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.169 [2024-11-20 11:48:13.794337] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:08.169 [2024-11-20 11:48:13.794359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:08.169 [2024-11-20 11:48:13.794380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:08.169 [2024-11-20 11:48:13.794392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 3: 0 / 261120 wr_cnt: 0 state: free
[bands 4-100: 0 / 261120 wr_cnt: 0 state: free (every band reports identical values; the per-band records are condensed here)]
00:34:08.171 [2024-11-20 11:48:13.795611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:08.171 [2024-11-20 11:48:13.795640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5519a080-ca77-47db-88d7-988f5a6d6cec 00:34:08.171 [2024-11-20
11:48:13.795652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:08.171 [2024-11-20 11:48:13.795672] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:08.171 [2024-11-20 11:48:13.795682] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:08.171 [2024-11-20 11:48:13.795693] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:08.171 [2024-11-20 11:48:13.795704] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:08.171 [2024-11-20 11:48:13.795715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:08.171 [2024-11-20 11:48:13.795726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:08.171 [2024-11-20 11:48:13.795753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:08.171 [2024-11-20 11:48:13.795763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:08.171 [2024-11-20 11:48:13.795774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.171 [2024-11-20 11:48:13.795786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:08.171 [2024-11-20 11:48:13.795798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:34:08.171 [2024-11-20 11:48:13.795808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.171 [2024-11-20 11:48:13.812289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.171 [2024-11-20 11:48:13.812344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:08.171 [2024-11-20 11:48:13.812375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:34:08.171 [2024-11-20 11:48:13.812386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.171 [2024-11-20 11:48:13.812928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.171 [2024-11-20 11:48:13.812959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:08.171 [2024-11-20 11:48:13.812982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:34:08.171 [2024-11-20 11:48:13.812993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.171 [2024-11-20 11:48:13.858146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.171 [2024-11-20 11:48:13.858366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:08.171 [2024-11-20 11:48:13.858396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.171 [2024-11-20 11:48:13.858409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.171 [2024-11-20 11:48:13.858477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.171 [2024-11-20 11:48:13.858493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:08.171 [2024-11-20 11:48:13.858505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.171 [2024-11-20 11:48:13.858516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.171 [2024-11-20 11:48:13.858645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.171 [2024-11-20 11:48:13.858666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:08.171 [2024-11-20 11:48:13.858679] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.171 [2024-11-20 11:48:13.858690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.171 [2024-11-20 11:48:13.858714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.171 [2024-11-20 11:48:13.858728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:08.171 [2024-11-20 11:48:13.858744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.171 [2024-11-20 11:48:13.858754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:13.953746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:13.953823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:08.430 [2024-11-20 11:48:13.953857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:13.953868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.028590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.028647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:08.430 [2024-11-20 11:48:14.028696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.028707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.028806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.028830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:08.430 [2024-11-20 11:48:14.028841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.028851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.028894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.028909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:08.430 [2024-11-20 11:48:14.028920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.028930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.029048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.029073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:08.430 [2024-11-20 11:48:14.029085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.029094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.029141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.029158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:08.430 [2024-11-20 11:48:14.029170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.029180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.029222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.029236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:34:08.430 [2024-11-20 11:48:14.029254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.029276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.029338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.430 [2024-11-20 11:48:14.029356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:08.430 [2024-11-20 11:48:14.029367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.430 [2024-11-20 11:48:14.029377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.430 [2024-11-20 11:48:14.029521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 415.139 ms, result 0 00:34:09.365 00:34:09.365 00:34:09.365 11:48:14 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:34:09.365 [2024-11-20 11:48:15.104360] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:34:09.365 [2024-11-20 11:48:15.104832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79890 ] 00:34:09.623 [2024-11-20 11:48:15.287136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.881 [2024-11-20 11:48:15.398412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.139 [2024-11-20 11:48:15.719807] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:10.139 [2024-11-20 11:48:15.719903] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:10.139 [2024-11-20 11:48:15.880859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.139 [2024-11-20 11:48:15.880905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:10.139 [2024-11-20 11:48:15.880932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:10.139 [2024-11-20 11:48:15.880944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.139 [2024-11-20 11:48:15.881010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.139 [2024-11-20 11:48:15.881029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:10.139 [2024-11-20 11:48:15.881047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:10.139 [2024-11-20 11:48:15.881059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.139 [2024-11-20 11:48:15.881090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:10.139 [2024-11-20 11:48:15.882044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:10.139 [2024-11-20 11:48:15.882089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.139 [2024-11-20 11:48:15.882103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:10.139 [2024-11-20 11:48:15.882116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 
00:34:10.139 [2024-11-20 11:48:15.882127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.139 [2024-11-20 11:48:15.884119] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:10.139 [2024-11-20 11:48:15.901424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.139 [2024-11-20 11:48:15.901488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:10.139 [2024-11-20 11:48:15.901508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.306 ms 00:34:10.139 [2024-11-20 11:48:15.901520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.139 [2024-11-20 11:48:15.901624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.139 [2024-11-20 11:48:15.901645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:10.139 [2024-11-20 11:48:15.901658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:10.139 [2024-11-20 11:48:15.901669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.397 [2024-11-20 11:48:15.911018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.397 [2024-11-20 11:48:15.911063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:10.397 [2024-11-20 11:48:15.911095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.234 ms 00:34:10.397 [2024-11-20 11:48:15.911105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.397 [2024-11-20 11:48:15.911213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.397 [2024-11-20 11:48:15.911233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:10.397 [2024-11-20 11:48:15.911245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:10.397 [2024-11-20 11:48:15.911256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.397 [2024-11-20 11:48:15.911313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.397 [2024-11-20 11:48:15.911330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:10.397 [2024-11-20 11:48:15.911342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:10.397 [2024-11-20 11:48:15.911352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.397 [2024-11-20 11:48:15.911385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:10.397 [2024-11-20 11:48:15.916333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.397 [2024-11-20 11:48:15.916371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:10.397 [2024-11-20 11:48:15.916402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:34:10.397 [2024-11-20 11:48:15.916418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.397 [2024-11-20 11:48:15.916454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.397 [2024-11-20 11:48:15.916468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:10.397 [2024-11-20 11:48:15.916480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:10.397 [2024-11-20 11:48:15.916490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.397 
[2024-11-20 11:48:15.916618] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:10.397 [2024-11-20 11:48:15.916660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:10.398 [2024-11-20 11:48:15.916706] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:10.398 [2024-11-20 11:48:15.916731] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:10.398 [2024-11-20 11:48:15.916844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:10.398 [2024-11-20 11:48:15.916861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:10.398 [2024-11-20 11:48:15.916875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:10.398 [2024-11-20 11:48:15.916890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:10.398 [2024-11-20 11:48:15.916905] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:10.398 [2024-11-20 11:48:15.916917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:10.398 [2024-11-20 11:48:15.916959] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:10.398 [2024-11-20 11:48:15.916984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:10.398 [2024-11-20 11:48:15.916995] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:10.398 [2024-11-20 11:48:15.917011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.398 [2024-11-20 11:48:15.917023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:10.398 [2024-11-20 11:48:15.917034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:34:10.398 [2024-11-20 11:48:15.917044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.398 [2024-11-20 11:48:15.917134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.398 [2024-11-20 11:48:15.917148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:10.398 [2024-11-20 11:48:15.917160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:10.398 [2024-11-20 11:48:15.917170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.398 [2024-11-20 11:48:15.917360] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:10.398 [2024-11-20 11:48:15.917392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:10.398 [2024-11-20 11:48:15.917406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:10.398 [2024-11-20 11:48:15.917441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917464] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:34:10.398 [2024-11-20 11:48:15.917475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:10.398 [2024-11-20 11:48:15.917497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:10.398 [2024-11-20 11:48:15.917507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:10.398 [2024-11-20 11:48:15.917517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:10.398 [2024-11-20 11:48:15.917528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:10.398 [2024-11-20 11:48:15.917557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:10.398 [2024-11-20 11:48:15.917583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:10.398 [2024-11-20 11:48:15.917607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:10.398 [2024-11-20 11:48:15.917641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:10.398 [2024-11-20 11:48:15.917672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:10.398 [2024-11-20 11:48:15.917704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:10.398 [2024-11-20 11:48:15.917735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:10.398 [2024-11-20 11:48:15.917766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:10.398 [2024-11-20 11:48:15.917786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:10.398 [2024-11-20 11:48:15.917798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:10.398 [2024-11-20 11:48:15.917809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:10.398 [2024-11-20 11:48:15.917819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:10.398 [2024-11-20 11:48:15.917830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:10.398 [2024-11-20 
11:48:15.917841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:10.398 [2024-11-20 11:48:15.917862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:10.398 [2024-11-20 11:48:15.917873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917884] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:10.398 [2024-11-20 11:48:15.917896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:10.398 [2024-11-20 11:48:15.917907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:10.398 [2024-11-20 11:48:15.917930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:10.398 [2024-11-20 11:48:15.917942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:10.398 [2024-11-20 11:48:15.917953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:10.398 [2024-11-20 11:48:15.917964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:10.398 [2024-11-20 11:48:15.917975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:10.398 [2024-11-20 11:48:15.917986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:10.398 [2024-11-20 11:48:15.917998] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:10.398 [2024-11-20 11:48:15.918012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:10.398 [2024-11-20 11:48:15.918025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:10.398 [2024-11-20 11:48:15.918037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:10.398 [2024-11-20 11:48:15.918049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:10.398 [2024-11-20 11:48:15.918060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:10.398 [2024-11-20 11:48:15.918071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:10.398 [2024-11-20 11:48:15.918082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:10.398 [2024-11-20 11:48:15.918093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:10.398 [2024-11-20 11:48:15.918104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:10.398 [2024-11-20 11:48:15.918115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:10.398 [2024-11-20 11:48:15.918126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:10.399 [2024-11-20 11:48:15.918137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:10.399 [2024-11-20 11:48:15.918147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:10.399 [2024-11-20 11:48:15.918158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:10.399 [2024-11-20 11:48:15.918169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:10.399 [2024-11-20 11:48:15.918181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:10.399 [2024-11-20 11:48:15.918199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:10.399 [2024-11-20 11:48:15.918212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:10.399 [2024-11-20 11:48:15.918223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:10.399 [2024-11-20 11:48:15.918235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:10.399 [2024-11-20 11:48:15.918247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:10.399 [2024-11-20 11:48:15.918259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:15.918270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:10.399 [2024-11-20 11:48:15.918282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:34:10.399 [2024-11-20 11:48:15.918293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:15.956076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:15.956143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:10.399 [2024-11-20 11:48:15.956179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.718 ms 00:34:10.399 [2024-11-20 11:48:15.956191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:15.956310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:15.956326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:10.399 [2024-11-20 11:48:15.956338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:10.399 [2024-11-20 11:48:15.956348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.005056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.005336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:10.399 [2024-11-20 11:48:16.005368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.621 ms 00:34:10.399 [2024-11-20 11:48:16.005381] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.005447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.005465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:10.399 [2024-11-20 11:48:16.005480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:10.399 [2024-11-20 11:48:16.005500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.006244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.006282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:10.399 [2024-11-20 11:48:16.006298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:34:10.399 [2024-11-20 11:48:16.006310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.006483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.006503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:10.399 [2024-11-20 11:48:16.006516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:34:10.399 [2024-11-20 11:48:16.006553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.024385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.024636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:10.399 [2024-11-20 11:48:16.024684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.800 ms 00:34:10.399 [2024-11-20 11:48:16.024698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.040446] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:10.399 [2024-11-20 11:48:16.040487] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:10.399 [2024-11-20 11:48:16.040522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.040534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:10.399 [2024-11-20 11:48:16.040559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.688 ms 00:34:10.399 [2024-11-20 11:48:16.040574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.067074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.067123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:10.399 [2024-11-20 11:48:16.067156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.439 ms 00:34:10.399 [2024-11-20 11:48:16.067167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.081278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.081321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:10.399 [2024-11-20 11:48:16.081353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.065 ms 00:34:10.399 [2024-11-20 11:48:16.081363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:10.399 [2024-11-20 11:48:16.095527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.095577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:10.399 [2024-11-20 11:48:16.095609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.122 ms 00:34:10.399 [2024-11-20 11:48:16.095619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.399 [2024-11-20 11:48:16.096445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.399 [2024-11-20 11:48:16.096638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:10.399 [2024-11-20 11:48:16.096667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:34:10.399 [2024-11-20 11:48:16.096688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.169200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.169301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:10.658 [2024-11-20 11:48:16.169346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.473 ms 00:34:10.658 [2024-11-20 11:48:16.169358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.180804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:10.658 [2024-11-20 11:48:16.183254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.183443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:10.658 [2024-11-20 11:48:16.183487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.836 ms 00:34:10.658 [2024-11-20 11:48:16.183500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.183646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.183670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:10.658 [2024-11-20 11:48:16.183684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:10.658 [2024-11-20 11:48:16.183701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.183817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.183836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:10.658 [2024-11-20 11:48:16.183849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:10.658 [2024-11-20 11:48:16.183860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.183892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.183921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:10.658 [2024-11-20 11:48:16.183933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:10.658 [2024-11-20 11:48:16.183944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.184030] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:10.658 [2024-11-20 11:48:16.184052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 
[2024-11-20 11:48:16.184063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:10.658 [2024-11-20 11:48:16.184074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:10.658 [2024-11-20 11:48:16.184084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.211795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.212026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:10.658 [2024-11-20 11:48:16.212055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.682 ms 00:34:10.658 [2024-11-20 11:48:16.212077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.212179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.658 [2024-11-20 11:48:16.212199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:10.658 [2024-11-20 11:48:16.212212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:10.658 [2024-11-20 11:48:16.212223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.658 [2024-11-20 11:48:16.213829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.372 ms, result 0 00:34:12.033  [2024-11-20T11:48:18.734Z] Copying: 24/1024 [MB] (24 MBps)
[intermediate progress records condensed: Copying advanced from 48/1024 to 878/1024 MB at a reported 23-26 MBps between 2024-11-20T11:48:19Z and 2024-11-20T11:48:52Z]
[2024-11-20T11:48:53.827Z] 
Copying: 903/1024 [MB] (25 MBps) [2024-11-20T11:48:54.763Z] Copying: 929/1024 [MB] (25 MBps) [2024-11-20T11:48:55.699Z] Copying: 954/1024 [MB] (25 MBps) [2024-11-20T11:48:56.636Z] Copying: 979/1024 [MB] (25 MBps) [2024-11-20T11:48:57.582Z] Copying: 1003/1024 [MB] (23 MBps) [2024-11-20T11:48:57.842Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 11:48:57.658962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.659072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:52.076 [2024-11-20 11:48:57.659100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:52.076 [2024-11-20 11:48:57.659117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.659165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:52.076 [2024-11-20 11:48:57.665384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.665432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:52.076 [2024-11-20 11:48:57.665464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:34:52.076 [2024-11-20 11:48:57.665480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.665993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.666042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:52.076 [2024-11-20 11:48:57.666064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:34:52.076 [2024-11-20 11:48:57.666079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.671242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.671286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:52.076 [2024-11-20 11:48:57.671306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:34:52.076 [2024-11-20 11:48:57.671321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.681312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.681370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:52.076 [2024-11-20 11:48:57.681392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.950 ms 00:34:52.076 [2024-11-20 11:48:57.681407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.712658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.712704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:52.076 [2024-11-20 11:48:57.712738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.089 ms 00:34:52.076 [2024-11-20 11:48:57.712749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.728883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.728925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:52.076 [2024-11-20 11:48:57.728959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.106 ms 00:34:52.076 [2024-11-20 
11:48:57.728985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.729141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.729197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:52.076 [2024-11-20 11:48:57.729211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:34:52.076 [2024-11-20 11:48:57.729221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.757930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.757969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:52.076 [2024-11-20 11:48:57.758001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.689 ms 00:34:52.076 [2024-11-20 11:48:57.758011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.786940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.786990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:52.076 [2024-11-20 11:48:57.787037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.905 ms 00:34:52.076 [2024-11-20 11:48:57.787048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.076 [2024-11-20 11:48:57.818422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.076 [2024-11-20 11:48:57.818638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:52.076 [2024-11-20 11:48:57.818666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.350 ms 00:34:52.076 [2024-11-20 11:48:57.818678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.336 [2024-11-20 11:48:57.846380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.336 [2024-11-20 11:48:57.846601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:52.336 [2024-11-20 11:48:57.846646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.621 ms 00:34:52.336 [2024-11-20 11:48:57.846658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.336 [2024-11-20 11:48:57.846685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:52.336 [2024-11-20 11:48:57.846715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:52.336 [2024-11-20 11:48:57.846811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[bands 9-82: 0 / 261120 wr_cnt: 0 state: free (every band reports identical values; the per-band records are condensed here)]
00:34:52.337 [2024-11-20 11:48:57.847695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:52.337 [2024-11-20 11:48:57.847895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:52.337 [2024-11-20 11:48:57.847910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5519a080-ca77-47db-88d7-988f5a6d6cec 00:34:52.337 [2024-11-20 11:48:57.847921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:52.337 [2024-11-20 11:48:57.847931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:52.337 [2024-11-20 11:48:57.847941] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:52.337 [2024-11-20 11:48:57.847951] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:52.337 [2024-11-20 11:48:57.847960] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:52.337 [2024-11-20 11:48:57.847971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:52.337 [2024-11-20 11:48:57.847992] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:52.337 [2024-11-20 11:48:57.848001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:52.337 [2024-11-20 11:48:57.848010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:52.337 [2024-11-20 11:48:57.848020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.337 [2024-11-20 11:48:57.848030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:52.337 [2024-11-20 11:48:57.848042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:34:52.337 [2024-11-20 11:48:57.848051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.337 [2024-11-20 11:48:57.863765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.337 [2024-11-20 11:48:57.863802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:52.337 [2024-11-20 11:48:57.863834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.669 ms 00:34:52.337 [2024-11-20 11:48:57.863844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.337 [2024-11-20 11:48:57.864310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.337 [2024-11-20 11:48:57.864349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:52.337 [2024-11-20 11:48:57.864364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:34:52.337 [2024-11-20 11:48:57.864384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.337 [2024-11-20 11:48:57.908583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.337 [2024-11-20 11:48:57.908643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:52.338 [2024-11-20 11:48:57.908693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:57.908706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:57.908796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:57.908811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:52.338 [2024-11-20 11:48:57.908824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:57.908842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:57.908962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:57.908982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:52.338 [2024-11-20 11:48:57.908995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:57.909006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:57.909028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:57.909041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:52.338 [2024-11-20 11:48:57.909053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:57.909064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.012938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:34:52.338 [2024-11-20 11:48:58.012999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:52.338 [2024-11-20 11:48:58.013035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.013058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.096455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.096592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:52.338 [2024-11-20 11:48:58.096612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.096624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.096749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.096765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:52.338 [2024-11-20 11:48:58.096778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.096788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.096835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.096865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:52.338 [2024-11-20 11:48:58.096893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.096912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.097050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.097069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:52.338 [2024-11-20 11:48:58.097087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.097098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.097159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.097184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:52.338 [2024-11-20 11:48:58.097198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.097208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.097254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.097276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:52.338 [2024-11-20 11:48:58.097302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.097313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 11:48:58.097367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.338 [2024-11-20 11:48:58.097383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:52.338 [2024-11-20 11:48:58.097395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.338 [2024-11-20 11:48:58.097406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.338 [2024-11-20 
11:48:58.097587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 438.565 ms, result 0 00:34:53.275 00:34:53.275 00:34:53.275 11:48:59 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:55.806 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:55.806 11:49:00 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:55.806 [2024-11-20 11:49:01.084523] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:34:55.806 [2024-11-20 11:49:01.084758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80344 ] 00:34:55.806 [2024-11-20 11:49:01.264737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.806 [2024-11-20 11:49:01.414350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.065 [2024-11-20 11:49:01.753878] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:56.065 [2024-11-20 11:49:01.753962] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:56.324 [2024-11-20 11:49:01.916867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.324 [2024-11-20 11:49:01.916962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:56.324 [2024-11-20 11:49:01.917004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:56.324 [2024-11-20 11:49:01.917015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.324 [2024-11-20 11:49:01.917072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.324 [2024-11-20 11:49:01.917089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:56.324 [2024-11-20 11:49:01.917105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:56.325 [2024-11-20 11:49:01.917116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.917142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:56.325 [2024-11-20 11:49:01.918198] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:56.325 [2024-11-20 11:49:01.918479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.918513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:56.325 [2024-11-20 11:49:01.918538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:34:56.325 [2024-11-20 11:49:01.918594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.920767] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:56.325 [2024-11-20 11:49:01.936904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.936945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:56.325 [2024-11-20 11:49:01.936978] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.138 ms 00:34:56.325 [2024-11-20 11:49:01.936989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.937059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.937078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:56.325 [2024-11-20 11:49:01.937090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:56.325 [2024-11-20 11:49:01.937100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.946212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.946433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:56.325 [2024-11-20 11:49:01.946474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.026 ms 00:34:56.325 [2024-11-20 11:49:01.946499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.946680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.946704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:56.325 [2024-11-20 11:49:01.946732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:34:56.325 [2024-11-20 11:49:01.946758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.946814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.946831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:56.325 [2024-11-20 11:49:01.946843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:56.325 [2024-11-20 11:49:01.946854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.946886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:56.325 [2024-11-20 11:49:01.951668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.951703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:56.325 [2024-11-20 11:49:01.951735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:34:56.325 [2024-11-20 11:49:01.951751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.951787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.951801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:56.325 [2024-11-20 11:49:01.951812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:56.325 [2024-11-20 11:49:01.951822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.951883] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:56.325 [2024-11-20 11:49:01.951927] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:56.325 [2024-11-20 11:49:01.951965] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:56.325 [2024-11-20 11:49:01.951986] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:56.325 [2024-11-20 11:49:01.952083] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:56.325 [2024-11-20 11:49:01.952098] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:56.325 [2024-11-20 11:49:01.952111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:56.325 [2024-11-20 11:49:01.952124] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952136] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952147] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:56.325 [2024-11-20 11:49:01.952157] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:56.325 [2024-11-20 11:49:01.952167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:56.325 [2024-11-20 11:49:01.952177] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:56.325 [2024-11-20 11:49:01.952193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.952203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:56.325 [2024-11-20 11:49:01.952214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:34:56.325 [2024-11-20 11:49:01.952224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.952307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.325 [2024-11-20 11:49:01.952338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:56.325 [2024-11-20 11:49:01.952349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:56.325 [2024-11-20 11:49:01.952359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.325 [2024-11-20 11:49:01.952478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:56.325 [2024-11-20 11:49:01.952502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:56.325 [2024-11-20 11:49:01.952514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:56.325 [2024-11-20 11:49:01.952589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:56.325 [2024-11-20 11:49:01.952636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:56.325 [2024-11-20 11:49:01.952671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:56.325 [2024-11-20 11:49:01.952699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:34:56.325 [2024-11-20 11:49:01.952708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:56.325 [2024-11-20 11:49:01.952718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:56.325 [2024-11-20 11:49:01.952729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:56.325 [2024-11-20 11:49:01.952750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:56.325 [2024-11-20 11:49:01.952772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:56.325 [2024-11-20 11:49:01.952803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:56.325 [2024-11-20 11:49:01.952832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:56.325 [2024-11-20 11:49:01.952859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:56.325 [2024-11-20 11:49:01.952921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:56.325 [2024-11-20 11:49:01.952948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:56.325 [2024-11-20 11:49:01.952967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:56.325 [2024-11-20 11:49:01.952995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:56.325 [2024-11-20 11:49:01.953007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:56.325 [2024-11-20 11:49:01.953025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:56.325 [2024-11-20 11:49:01.953044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:56.325 [2024-11-20 11:49:01.953065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:56.325 [2024-11-20 11:49:01.953086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:56.325 [2024-11-20 11:49:01.953113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:56.325 [2024-11-20 11:49:01.953131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:56.325 [2024-11-20 11:49:01.953151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:56.325 [2024-11-20 11:49:01.953168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:56.325 [2024-11-20 11:49:01.953187] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:56.325 [2024-11-20 11:49:01.953208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:56.325 [2024-11-20 11:49:01.953226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:56.326 [2024-11-20 11:49:01.953244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:56.326 [2024-11-20 11:49:01.953263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:56.326 [2024-11-20 11:49:01.953310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:56.326 [2024-11-20 11:49:01.953335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:56.326 [2024-11-20 11:49:01.953357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:56.326 [2024-11-20 11:49:01.953377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:56.326 [2024-11-20 11:49:01.953397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:56.326 [2024-11-20 11:49:01.953418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:56.326 [2024-11-20 11:49:01.953435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:56.326 [2024-11-20 11:49:01.953478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:56.326 [2024-11-20 11:49:01.953502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:56.326 [2024-11-20 11:49:01.953549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:56.326 [2024-11-20 11:49:01.953578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:56.326 [2024-11-20 11:49:01.953600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:56.326 [2024-11-20 11:49:01.953636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:56.326 [2024-11-20 11:49:01.953656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:56.326 [2024-11-20 11:49:01.953688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:56.326 [2024-11-20 11:49:01.953700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953757] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:56.326 [2024-11-20 11:49:01.953801] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:56.326 [2024-11-20 11:49:01.953841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:56.326 [2024-11-20 11:49:01.953886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:56.326 [2024-11-20 11:49:01.953907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:56.326 [2024-11-20 11:49:01.953926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:56.326 [2024-11-20 11:49:01.953944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:01.953959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:56.326 [2024-11-20 11:49:01.953976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:34:56.326 [2024-11-20 11:49:01.954019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:01.991460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:01.991516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:56.326 [2024-11-20 11:49:01.991563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.335 ms 00:34:56.326 [2024-11-20 11:49:01.991577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:01.991718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:01.991734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:56.326 [2024-11-20 11:49:01.991746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:34:56.326 [2024-11-20 11:49:01.991756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:02.046176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:02.046432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:56.326 [2024-11-20 11:49:02.046471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.334 ms 00:34:56.326 [2024-11-20 11:49:02.046496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:02.046605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:02.046638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:56.326 [2024-11-20 11:49:02.046665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:56.326 [2024-11-20 11:49:02.046701] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:02.047407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:02.047465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:56.326 [2024-11-20 11:49:02.047492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:34:56.326 [2024-11-20 11:49:02.047514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:02.047762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:02.047783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:56.326 [2024-11-20 11:49:02.047796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:34:56.326 [2024-11-20 11:49:02.047828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:02.066350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:02.066391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:56.326 [2024-11-20 11:49:02.066429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.495 ms 00:34:56.326 [2024-11-20 11:49:02.066441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.326 [2024-11-20 11:49:02.081897] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:56.326 [2024-11-20 11:49:02.081939] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:56.326 [2024-11-20 11:49:02.081973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.326 [2024-11-20 11:49:02.081985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:56.326 [2024-11-20 11:49:02.081997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.361 ms 00:34:56.326 [2024-11-20 11:49:02.082007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.108755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.108802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:56.586 [2024-11-20 11:49:02.108835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.706 ms 00:34:56.586 [2024-11-20 11:49:02.108846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.124356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.124399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:56.586 [2024-11-20 11:49:02.124432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.468 ms 00:34:56.586 [2024-11-20 11:49:02.124444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.139429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.139484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:56.586 [2024-11-20 11:49:02.139517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.927 ms 00:34:56.586 [2024-11-20 11:49:02.139528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 
[2024-11-20 11:49:02.140585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.140659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:56.586 [2024-11-20 11:49:02.140677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:34:56.586 [2024-11-20 11:49:02.140708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.216092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.216180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:56.586 [2024-11-20 11:49:02.216224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.356 ms 00:34:56.586 [2024-11-20 11:49:02.216235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.227882] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:56.586 [2024-11-20 11:49:02.230504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.230751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:56.586 [2024-11-20 11:49:02.230789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.207 ms 00:34:56.586 [2024-11-20 11:49:02.230817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.230995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.231025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:56.586 [2024-11-20 11:49:02.231069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:56.586 [2024-11-20 11:49:02.231086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.231197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.231216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:56.586 [2024-11-20 11:49:02.231228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:34:56.586 [2024-11-20 11:49:02.231239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.231269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.231283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:56.586 [2024-11-20 11:49:02.231295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:56.586 [2024-11-20 11:49:02.231348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.231403] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:56.586 [2024-11-20 11:49:02.231425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 11:49:02.231436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:56.586 [2024-11-20 11:49:02.231448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:56.586 [2024-11-20 11:49:02.231475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.586 [2024-11-20 11:49:02.261432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.586 [2024-11-20 
11:49:02.261477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:34:56.586 [2024-11-20 11:49:02.261512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.927 ms
00:34:56.586 [2024-11-20 11:49:02.261530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:56.586 [2024-11-20 11:49:02.261654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:56.586 [2024-11-20 11:49:02.261674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:34:56.586 [2024-11-20 11:49:02.261703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:34:56.586 [2024-11-20 11:49:02.261714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:56.586 [2024-11-20 11:49:02.263510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.059 ms, result 0
00:34:57.556 [Copying progress updates: 23/1024 -> 1012/1024 MB, 18-25 MBps]
[2024-11-20T11:49:47.013Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-20T11:49:47.013Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-20 11:49:46.852442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.852715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:41.247 [2024-11-20 11:49:46.852848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:41.247 [2024-11-20 11:49:46.852987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.247 [2024-11-20 11:49:46.855262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:41.247 [2024-11-20 11:49:46.862173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.862373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:41.247 [2024-11-20 11:49:46.862414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.683 ms 00:35:41.247 [2024-11-20 11:49:46.862427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.247 [2024-11-20 11:49:46.875041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.875095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:41.247 [2024-11-20 11:49:46.875110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.439 ms 00:35:41.247 [2024-11-20 11:49:46.875121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.247 [2024-11-20 11:49:46.898370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.898427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:41.247 [2024-11-20 11:49:46.898444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.220 ms 00:35:41.247 [2024-11-20 11:49:46.898457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.247 [2024-11-20 11:49:46.905200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.905244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:41.247 [2024-11-20 11:49:46.905257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.705 ms 00:35:41.247 [2024-11-20 11:49:46.905268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.247 [2024-11-20 11:49:46.935546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.935609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:41.247 [2024-11-20 11:49:46.935625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.172 ms 00:35:41.247 [2024-11-20 11:49:46.935636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.247 [2024-11-20 11:49:46.952621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.247 [2024-11-20 11:49:46.952689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:41.247 [2024-11-20 11:49:46.952706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:35:41.247 [2024-11-20 11:49:46.952718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.507 [2024-11-20 11:49:47.063225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.507 [2024-11-20 
11:49:47.063287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:35:41.507 [2024-11-20 11:49:47.063304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.462 ms
00:35:41.507 [2024-11-20 11:49:47.063322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:41.507 [2024-11-20 11:49:47.092487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:41.507 [2024-11-20 11:49:47.092539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:35:41.507 [2024-11-20 11:49:47.092569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.144 ms
00:35:41.507 [2024-11-20 11:49:47.092582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:41.507 [2024-11-20 11:49:47.120646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:41.507 [2024-11-20 11:49:47.120724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:35:41.507 [2024-11-20 11:49:47.120739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.024 ms
00:35:41.507 [2024-11-20 11:49:47.120750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:41.507 [2024-11-20 11:49:47.147823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:41.507 [2024-11-20 11:49:47.147874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:35:41.507 [2024-11-20 11:49:47.147889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.033 ms
00:35:41.507 [2024-11-20 11:49:47.147900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:41.507 [2024-11-20 11:49:47.174529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:41.507 [2024-11-20 11:49:47.174604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:35:41.507 [2024-11-20 11:49:47.174620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.529 ms
00:35:41.507 [2024-11-20 11:49:47.174631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:41.507 [2024-11-20 11:49:47.174671] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:35:41.507 [2024-11-20 11:49:47.174693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118016 / 261120 wr_cnt: 1 state: open
[Bands 2-58: 0 / 261120 wr_cnt: 0 state: free]
00:35:41.508 [2024-11-20 11:49:47.175389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59:
0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:41.508 [2024-11-20 11:49:47.175949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:41.508 [2024-11-20 11:49:47.175959] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5519a080-ca77-47db-88d7-988f5a6d6cec 00:35:41.508 [2024-11-20 11:49:47.175970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118016 00:35:41.508 [2024-11-20 11:49:47.175981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118976 00:35:41.508 [2024-11-20 11:49:47.175991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118016 00:35:41.508 [2024-11-20 11:49:47.176002] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:35:41.508 [2024-11-20 11:49:47.176012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:41.508 [2024-11-20 11:49:47.176027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:41.508 [2024-11-20 11:49:47.176048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:41.508 [2024-11-20 11:49:47.176058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:41.508 [2024-11-20 11:49:47.176068] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:41.508 [2024-11-20 11:49:47.176078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.508 [2024-11-20 11:49:47.176089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:41.508 [2024-11-20 11:49:47.176099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:35:41.508 [2024-11-20 11:49:47.176110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.508 [2024-11-20 11:49:47.192239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.508 [2024-11-20 11:49:47.192288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:41.508 [2024-11-20 11:49:47.192303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.094 ms 00:35:41.508 [2024-11-20 11:49:47.192337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.508 [2024-11-20 11:49:47.192833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.508 [2024-11-20 11:49:47.192854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:41.508 [2024-11-20 11:49:47.192867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:35:41.508 [2024-11-20 11:49:47.192879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.508 [2024-11-20 11:49:47.235845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.508 [2024-11-20 11:49:47.235905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:41.508 [2024-11-20 11:49:47.235926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.508 [2024-11-20 11:49:47.235937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.508 [2024-11-20 11:49:47.235999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.508 [2024-11-20 11:49:47.236012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:41.508 [2024-11-20 11:49:47.236023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.508 [2024-11-20 11:49:47.236034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.509 [2024-11-20 11:49:47.236104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.509 [2024-11-20 11:49:47.236122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:41.509 [2024-11-20 11:49:47.236143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.509 [2024-11-20 11:49:47.236160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.509 [2024-11-20 11:49:47.236181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.509 [2024-11-20 11:49:47.236193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:41.509 [2024-11-20 11:49:47.236204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.509 [2024-11-20 11:49:47.236214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.329526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.329604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:41.767 [2024-11-20 11:49:47.329642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.329653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.403952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:41.767 [2024-11-20 11:49:47.404035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.404143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:41.767 [2024-11-20 11:49:47.404169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.404248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:41.767 [2024-11-20 11:49:47.404291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.404449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:41.767 [2024-11-20 11:49:47.404501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.404621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:41.767 [2024-11-20 11:49:47.404654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.404727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:41.767 [2024-11-20 11:49:47.404767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.404835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:41.767 [2024-11-20 11:49:47.404852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:41.767 [2024-11-20 11:49:47.404864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:41.767 [2024-11-20 11:49:47.404875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.767 [2024-11-20 11:49:47.405043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.629 ms, result 0 00:35:43.144 00:35:43.144 00:35:43.144 11:49:48 ftl.ftl_restore -- 
ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:43.403 [2024-11-20 11:49:48.980956] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:35:43.403 [2024-11-20 11:49:48.981133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80809 ] 00:35:43.403 [2024-11-20 11:49:49.152758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.664 [2024-11-20 11:49:49.271062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.932 [2024-11-20 11:49:49.597242] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:43.932 [2024-11-20 11:49:49.597396] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:44.193 [2024-11-20 11:49:49.759672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.193 [2024-11-20 11:49:49.759728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:44.193 [2024-11-20 11:49:49.759769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:44.193 [2024-11-20 11:49:49.759781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.193 [2024-11-20 11:49:49.759840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.193 [2024-11-20 11:49:49.759857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:44.193 [2024-11-20 11:49:49.759872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:44.193 [2024-11-20 11:49:49.759882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.759910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:44.194 [2024-11-20 11:49:49.760816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:44.194 [2024-11-20 11:49:49.760870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.760883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:44.194 [2024-11-20 11:49:49.760895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:35:44.194 [2024-11-20 11:49:49.760905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.762947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:44.194 [2024-11-20 11:49:49.778337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.778395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:44.194 [2024-11-20 11:49:49.778427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.397 ms 00:35:44.194 [2024-11-20 11:49:49.778439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.778510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.778527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate 
super block 00:35:44.194 [2024-11-20 11:49:49.778553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:35:44.194 [2024-11-20 11:49:49.778565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.787466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.787519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:44.194 [2024-11-20 11:49:49.787552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.779 ms 00:35:44.194 [2024-11-20 11:49:49.787583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.787681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.787699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:44.194 [2024-11-20 11:49:49.787710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:44.194 [2024-11-20 11:49:49.787721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.787809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.787826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:44.194 [2024-11-20 11:49:49.787838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:44.194 [2024-11-20 11:49:49.787849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.787884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:44.194 [2024-11-20 11:49:49.792599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.792633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:44.194 [2024-11-20 11:49:49.792663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:35:44.194 [2024-11-20 11:49:49.792679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.792714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.792728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:44.194 [2024-11-20 11:49:49.792740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:44.194 [2024-11-20 11:49:49.792750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.792810] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:44.194 [2024-11-20 11:49:49.792872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:44.194 [2024-11-20 11:49:49.792914] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:44.194 [2024-11-20 11:49:49.792953] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:44.194 [2024-11-20 11:49:49.793059] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:44.194 [2024-11-20 11:49:49.793074] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:44.194 
[2024-11-20 11:49:49.793089] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:44.194 [2024-11-20 11:49:49.793103] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793116] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793127] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:44.194 [2024-11-20 11:49:49.793138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:44.194 [2024-11-20 11:49:49.793148] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:44.194 [2024-11-20 11:49:49.793159] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:44.194 [2024-11-20 11:49:49.793175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.793185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:44.194 [2024-11-20 11:49:49.793196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:35:44.194 [2024-11-20 11:49:49.793206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.793306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.194 [2024-11-20 11:49:49.793353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:44.194 [2024-11-20 11:49:49.793365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:35:44.194 [2024-11-20 11:49:49.793376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.194 [2024-11-20 11:49:49.793494] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:44.194 [2024-11-20 11:49:49.793521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:44.194 [2024-11-20 11:49:49.793549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:44.194 [2024-11-20 11:49:49.793584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:44.194 [2024-11-20 11:49:49.793615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:44.194 [2024-11-20 11:49:49.793635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:44.194 [2024-11-20 11:49:49.793644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:44.194 [2024-11-20 11:49:49.793659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:44.194 [2024-11-20 11:49:49.793669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:44.194 [2024-11-20 11:49:49.793679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:44.194 [2024-11-20 11:49:49.793704] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:44.194 [2024-11-20 11:49:49.793725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:44.194 [2024-11-20 11:49:49.793755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:44.194 [2024-11-20 11:49:49.793785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:44.194 [2024-11-20 11:49:49.793814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:44.194 [2024-11-20 11:49:49.793834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:44.194 [2024-11-20 11:49:49.793844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:44.194 [2024-11-20 11:49:49.793853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:44.195 [2024-11-20 11:49:49.793863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:44.195 [2024-11-20 11:49:49.793888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:44.195 [2024-11-20 11:49:49.793897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:44.195 [2024-11-20 11:49:49.793907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:44.195 [2024-11-20 11:49:49.793917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:44.195 [2024-11-20 11:49:49.793926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:44.195 [2024-11-20 11:49:49.793936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:44.195 [2024-11-20 11:49:49.793946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:44.195 [2024-11-20 11:49:49.793955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:44.195 [2024-11-20 11:49:49.793965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:44.195 [2024-11-20 11:49:49.793974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:44.195 [2024-11-20 11:49:49.793983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:44.195 [2024-11-20 11:49:49.793993] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:44.195 [2024-11-20 11:49:49.794003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:44.195 [2024-11-20 11:49:49.794014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:44.195 [2024-11-20 11:49:49.794024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:44.195 [2024-11-20 
11:49:49.794035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:44.195 [2024-11-20 11:49:49.794046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:44.195 [2024-11-20 11:49:49.794056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:44.195 [2024-11-20 11:49:49.794065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:44.195 [2024-11-20 11:49:49.794074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:44.195 [2024-11-20 11:49:49.794084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:44.195 [2024-11-20 11:49:49.794095] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:44.195 [2024-11-20 11:49:49.794108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:44.195 [2024-11-20 11:49:49.794131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:44.195 [2024-11-20 11:49:49.794141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:44.195 [2024-11-20 11:49:49.794151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:44.195 [2024-11-20 11:49:49.794170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:44.195 [2024-11-20 11:49:49.794180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:44.195 [2024-11-20 11:49:49.794190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:44.195 [2024-11-20 11:49:49.794201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:44.195 [2024-11-20 11:49:49.794210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:44.195 [2024-11-20 11:49:49.794221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:44.195 [2024-11-20 11:49:49.794272] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:35:44.195 [2024-11-20 11:49:49.794289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:44.195 [2024-11-20 11:49:49.794323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:44.195 [2024-11-20 11:49:49.794349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:44.195 [2024-11-20 11:49:49.794360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:44.195 [2024-11-20 11:49:49.794373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.195 [2024-11-20 11:49:49.794384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:44.195 [2024-11-20 11:49:49.794395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:35:44.195 [2024-11-20 11:49:49.794406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.195 [2024-11-20 11:49:49.831513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.195 [2024-11-20 11:49:49.831595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:44.195 [2024-11-20 11:49:49.831632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.043 ms 00:35:44.195 [2024-11-20 11:49:49.831643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.195 [2024-11-20 11:49:49.831752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.195 [2024-11-20 11:49:49.831766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:44.195 [2024-11-20 11:49:49.831778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:35:44.195 [2024-11-20 11:49:49.831788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.195 [2024-11-20 11:49:49.879606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.195 [2024-11-20 11:49:49.879672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:44.195 [2024-11-20 11:49:49.879705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.708 ms 00:35:44.195 [2024-11-20 11:49:49.879717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.195 [2024-11-20 11:49:49.879775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.195 [2024-11-20 11:49:49.879790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:44.195 [2024-11-20 11:49:49.879802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:44.196 [2024-11-20 11:49:49.879818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.196 [2024-11-20 11:49:49.880501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.196 [2024-11-20 11:49:49.880577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:44.196 [2024-11-20 11:49:49.880609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:35:44.196 [2024-11-20 11:49:49.880620] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.196 [2024-11-20 11:49:49.880820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.196 [2024-11-20 11:49:49.880838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:44.196 [2024-11-20 11:49:49.880849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:35:44.196 [2024-11-20 11:49:49.880868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.196 [2024-11-20 11:49:49.898364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.196 [2024-11-20 11:49:49.898421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:44.196 [2024-11-20 11:49:49.898457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.471 ms 00:35:44.196 [2024-11-20 11:49:49.898467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.196 [2024-11-20 11:49:49.913455] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:44.196 [2024-11-20 11:49:49.913519] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:44.196 [2024-11-20 11:49:49.913564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.196 [2024-11-20 11:49:49.913577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:44.196 [2024-11-20 11:49:49.913589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.948 ms 00:35:44.196 [2024-11-20 11:49:49.913599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.196 [2024-11-20 11:49:49.938971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.196 [2024-11-20 11:49:49.939035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:44.196 [2024-11-20 11:49:49.939065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.329 ms 00:35:44.196 [2024-11-20 11:49:49.939077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.196 [2024-11-20 11:49:49.952615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.196 [2024-11-20 11:49:49.952693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:44.196 [2024-11-20 11:49:49.952723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.495 ms 00:35:44.196 [2024-11-20 11:49:49.952733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:49.966051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:49.966106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:44.455 [2024-11-20 11:49:49.966137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.272 ms 00:35:44.455 [2024-11-20 11:49:49.966147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:49.967010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:49.967057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:44.455 [2024-11-20 11:49:49.967086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:35:44.455 [2024-11-20 11:49:49.967102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:35:44.455 [2024-11-20 11:49:50.035425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.035513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:44.455 [2024-11-20 11:49:50.035565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.298 ms 00:35:44.455 [2024-11-20 11:49:50.035577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.047074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:44.455 [2024-11-20 11:49:50.049718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.049779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:44.455 [2024-11-20 11:49:50.049810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.073 ms 00:35:44.455 [2024-11-20 11:49:50.049822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.049919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.049938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:44.455 [2024-11-20 11:49:50.049951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:44.455 [2024-11-20 11:49:50.049965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.052004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.052055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:44.455 [2024-11-20 11:49:50.052086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.950 ms 00:35:44.455 [2024-11-20 11:49:50.052097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.052136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.052152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:44.455 [2024-11-20 11:49:50.052174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:44.455 [2024-11-20 11:49:50.052186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.052232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:44.455 [2024-11-20 11:49:50.052252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.052263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:44.455 [2024-11-20 11:49:50.052291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:44.455 [2024-11-20 11:49:50.052302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.084301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 11:49:50.084364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:44.455 [2024-11-20 11:49:50.084395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.961 ms 00:35:44.455 [2024-11-20 11:49:50.084413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.084498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:44.455 [2024-11-20 
11:49:50.084515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:44.455 [2024-11-20 11:49:50.084527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:35:44.455 [2024-11-20 11:49:50.084566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:44.455 [2024-11-20 11:49:50.088584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.217 ms, result 0 00:35:45.864  [2024-11-20T11:49:52.579Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-20T11:49:53.515Z] Copying: 47/1024 [MB] (25 MBps) [2024-11-20T11:49:54.451Z] Copying: 72/1024 [MB] (25 MBps) [2024-11-20T11:49:55.389Z] Copying: 97/1024 [MB] (24 MBps) [2024-11-20T11:49:56.325Z] Copying: 121/1024 [MB] (24 MBps) [2024-11-20T11:49:57.702Z] Copying: 146/1024 [MB] (24 MBps) [2024-11-20T11:49:58.640Z] Copying: 171/1024 [MB] (25 MBps) [2024-11-20T11:49:59.576Z] Copying: 195/1024 [MB] (24 MBps) [2024-11-20T11:50:00.511Z] Copying: 220/1024 [MB] (24 MBps) [2024-11-20T11:50:01.446Z] Copying: 244/1024 [MB] (23 MBps) [2024-11-20T11:50:02.382Z] Copying: 268/1024 [MB] (24 MBps) [2024-11-20T11:50:03.320Z] Copying: 292/1024 [MB] (24 MBps) [2024-11-20T11:50:04.697Z] Copying: 317/1024 [MB] (24 MBps) [2024-11-20T11:50:05.634Z] Copying: 341/1024 [MB] (24 MBps) [2024-11-20T11:50:06.571Z] Copying: 365/1024 [MB] (23 MBps) [2024-11-20T11:50:07.508Z] Copying: 390/1024 [MB] (24 MBps) [2024-11-20T11:50:08.456Z] Copying: 414/1024 [MB] (24 MBps) [2024-11-20T11:50:09.413Z] Copying: 438/1024 [MB] (24 MBps) [2024-11-20T11:50:10.347Z] Copying: 461/1024 [MB] (23 MBps) [2024-11-20T11:50:11.724Z] Copying: 485/1024 [MB] (23 MBps) [2024-11-20T11:50:12.316Z] Copying: 508/1024 [MB] (23 MBps) [2024-11-20T11:50:13.692Z] Copying: 531/1024 [MB] (23 MBps) [2024-11-20T11:50:14.628Z] Copying: 555/1024 [MB] (23 MBps) [2024-11-20T11:50:15.564Z] Copying: 578/1024 [MB] (23 MBps) [2024-11-20T11:50:16.502Z] Copying: 601/1024 [MB] (23 MBps) [2024-11-20T11:50:17.438Z] Copying: 625/1024 [MB] (23 MBps) [2024-11-20T11:50:18.374Z] Copying: 648/1024 [MB] (23 MBps) [2024-11-20T11:50:19.308Z] Copying: 672/1024 [MB] (23 MBps) [2024-11-20T11:50:20.685Z] Copying: 695/1024 [MB] (23 MBps) [2024-11-20T11:50:21.622Z] Copying: 719/1024 [MB] (23 MBps) [2024-11-20T11:50:22.558Z] Copying: 743/1024 [MB] (24 MBps) [2024-11-20T11:50:23.493Z] Copying: 767/1024 [MB] (23 MBps) [2024-11-20T11:50:24.431Z] Copying: 791/1024 [MB] (24 MBps) [2024-11-20T11:50:25.398Z] Copying: 815/1024 [MB] (24 MBps) [2024-11-20T11:50:26.335Z] Copying: 839/1024 [MB] (24 MBps) [2024-11-20T11:50:27.713Z] Copying: 863/1024 [MB] (23 MBps) [2024-11-20T11:50:28.650Z] Copying: 888/1024 [MB] (24 MBps) [2024-11-20T11:50:29.586Z] Copying: 912/1024 [MB] (24 MBps) [2024-11-20T11:50:30.523Z] Copying: 936/1024 [MB] (24 MBps) [2024-11-20T11:50:31.460Z] Copying: 960/1024 [MB] (24 MBps) [2024-11-20T11:50:32.393Z] Copying: 984/1024 [MB] (23 MBps) [2024-11-20T11:50:32.960Z] Copying: 1008/1024 [MB] (23 MBps) [2024-11-20T11:50:33.218Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 11:50:33.102342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.102421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:27.452 [2024-11-20 11:50:33.102443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:27.452 [2024-11-20 11:50:33.102456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.452 
[2024-11-20 11:50:33.102497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:27.452 [2024-11-20 11:50:33.106573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.106605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:27.452 [2024-11-20 11:50:33.106620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.052 ms 00:36:27.452 [2024-11-20 11:50:33.106632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.452 [2024-11-20 11:50:33.106892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.106919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:27.452 [2024-11-20 11:50:33.106933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:36:27.452 [2024-11-20 11:50:33.106944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.452 [2024-11-20 11:50:33.111871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.111912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:27.452 [2024-11-20 11:50:33.111928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.899 ms 00:36:27.452 [2024-11-20 11:50:33.111950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.452 [2024-11-20 11:50:33.119903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.119968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:27.452 [2024-11-20 11:50:33.119983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.911 ms 00:36:27.452 [2024-11-20 11:50:33.119995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.452 [2024-11-20 11:50:33.150345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.150400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:27.452 [2024-11-20 11:50:33.150431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.267 ms 00:36:27.452 [2024-11-20 11:50:33.150441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.452 [2024-11-20 11:50:33.166809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.452 [2024-11-20 11:50:33.166868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:27.452 [2024-11-20 11:50:33.166883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.326 ms 00:36:27.452 [2024-11-20 11:50:33.166894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.712 [2024-11-20 11:50:33.296135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.712 [2024-11-20 11:50:33.296210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:27.712 [2024-11-20 11:50:33.296243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 129.197 ms 00:36:27.712 [2024-11-20 11:50:33.296255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.712 [2024-11-20 11:50:33.324090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.712 [2024-11-20 11:50:33.324141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:27.712 [2024-11-20 
11:50:33.324156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.815 ms 00:36:27.712 [2024-11-20 11:50:33.324166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.712 [2024-11-20 11:50:33.351135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.712 [2024-11-20 11:50:33.351185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:27.712 [2024-11-20 11:50:33.351211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.932 ms 00:36:27.712 [2024-11-20 11:50:33.351221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.712 [2024-11-20 11:50:33.377906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.712 [2024-11-20 11:50:33.377956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:27.712 [2024-11-20 11:50:33.377969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.647 ms 00:36:27.712 [2024-11-20 11:50:33.377979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.712 [2024-11-20 11:50:33.404381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.712 [2024-11-20 11:50:33.404432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:27.712 [2024-11-20 11:50:33.404446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.325 ms 00:36:27.712 [2024-11-20 11:50:33.404456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.712 [2024-11-20 11:50:33.404493] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:27.712 [2024-11-20 11:50:33.404514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:27.712 [2024-11-20 11:50:33.404527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:27.712 [2024-11-20 11:50:33.404663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 
11:50:33.404683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:36:27.713 [2024-11-20 11:50:33.404956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.404997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:27.713 [2024-11-20 11:50:33.405445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:27.714 [2024-11-20 11:50:33.405725] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:27.714 [2024-11-20 11:50:33.405735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5519a080-ca77-47db-88d7-988f5a6d6cec 00:36:27.714 [2024-11-20 11:50:33.405746] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:27.714 [2024-11-20 11:50:33.405756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14016 00:36:27.714 [2024-11-20 11:50:33.405776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13056 00:36:27.714 [2024-11-20 11:50:33.405787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0735 00:36:27.714 [2024-11-20 11:50:33.405796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:27.714 [2024-11-20 11:50:33.405813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:27.714 [2024-11-20 11:50:33.405823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:27.714 [2024-11-20 11:50:33.405843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:27.714 [2024-11-20 11:50:33.405853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:27.714 [2024-11-20 11:50:33.405863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.714 [2024-11-20 11:50:33.405873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:27.714 [2024-11-20 11:50:33.405883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:36:27.714 [2024-11-20 11:50:33.405893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.714 [2024-11-20 11:50:33.421394] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.714 [2024-11-20 11:50:33.421442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:27.714 [2024-11-20 11:50:33.421457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.450 ms 00:36:27.714 [2024-11-20 11:50:33.421475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.714 [2024-11-20 11:50:33.421950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.714 [2024-11-20 11:50:33.421984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:27.714 [2024-11-20 11:50:33.421997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:36:27.714 [2024-11-20 11:50:33.422007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.714 [2024-11-20 11:50:33.461686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.714 [2024-11-20 11:50:33.461757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:27.714 [2024-11-20 11:50:33.461777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.714 [2024-11-20 11:50:33.461787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.714 [2024-11-20 11:50:33.461854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.714 [2024-11-20 11:50:33.461868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:27.714 [2024-11-20 11:50:33.461879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.714 [2024-11-20 11:50:33.461889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.714 [2024-11-20 11:50:33.461972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.714 [2024-11-20 11:50:33.461990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:27.714 [2024-11-20 11:50:33.462001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.714 [2024-11-20 11:50:33.462017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.714 [2024-11-20 11:50:33.462037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.714 [2024-11-20 11:50:33.462048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:27.714 [2024-11-20 11:50:33.462058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.714 [2024-11-20 11:50:33.462068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.554648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.554748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:27.974 [2024-11-20 11:50:33.554770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.554781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.632787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.632840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:27.974 [2024-11-20 11:50:33.632856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.632866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.632963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.632979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:27.974 [2024-11-20 11:50:33.632990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.633000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.633064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.633094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:27.974 [2024-11-20 11:50:33.633121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.633132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.633260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.633278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:27.974 [2024-11-20 11:50:33.633290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.633301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.633379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.633404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:27.974 [2024-11-20 11:50:33.633417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.633428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.633483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.633498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:27.974 [2024-11-20 11:50:33.633510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.633521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.633664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:27.974 [2024-11-20 11:50:33.633692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:27.974 [2024-11-20 11:50:33.633705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:27.974 [2024-11-20 11:50:33.633716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.974 [2024-11-20 11:50:33.633860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.484 ms, result 0 00:36:28.910 00:36:28.910 00:36:28.910 11:50:34 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:30.815 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:30.815 11:50:36 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:30.815 11:50:36 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:36:30.815 11:50:36 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:30.815 11:50:36 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:30.815 
11:50:36 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:30.815 11:50:36 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79191 00:36:30.816 11:50:36 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79191 ']' 00:36:30.816 11:50:36 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79191 00:36:30.816 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79191) - No such process 00:36:30.816 Process with pid 79191 is not found 00:36:30.816 11:50:36 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79191 is not found' 00:36:30.816 11:50:36 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:36:30.816 Remove shared memory files 00:36:30.816 11:50:36 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:30.816 11:50:36 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:36:30.816 11:50:36 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:36:30.816 11:50:36 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:36:30.816 11:50:36 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:31.075 11:50:36 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:36:31.075 00:36:31.075 real 3m27.849s 00:36:31.075 user 3m13.300s 00:36:31.075 sys 0m16.672s 00:36:31.075 11:50:36 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.075 11:50:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:36:31.075 ************************************ 00:36:31.075 END TEST ftl_restore 00:36:31.075 ************************************ 00:36:31.075 11:50:36 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:36:31.075 11:50:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:31.075 11:50:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:31.075 11:50:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:31.075 ************************************ 00:36:31.075 START TEST ftl_dirty_shutdown 00:36:31.075 ************************************ 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:36:31.075 * Looking for test storage... 
00:36:31.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:31.075 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.335 --rc genhtml_branch_coverage=1 00:36:31.335 --rc genhtml_function_coverage=1 00:36:31.335 --rc genhtml_legend=1 00:36:31.335 --rc geninfo_all_blocks=1 00:36:31.335 --rc geninfo_unexecuted_blocks=1 00:36:31.335 00:36:31.335 ' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.335 --rc genhtml_branch_coverage=1 00:36:31.335 --rc genhtml_function_coverage=1 00:36:31.335 --rc genhtml_legend=1 00:36:31.335 --rc geninfo_all_blocks=1 00:36:31.335 --rc geninfo_unexecuted_blocks=1 00:36:31.335 00:36:31.335 ' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.335 --rc genhtml_branch_coverage=1 00:36:31.335 --rc genhtml_function_coverage=1 00:36:31.335 --rc genhtml_legend=1 00:36:31.335 --rc geninfo_all_blocks=1 00:36:31.335 --rc geninfo_unexecuted_blocks=1 00:36:31.335 00:36:31.335 ' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.335 --rc genhtml_branch_coverage=1 00:36:31.335 --rc genhtml_function_coverage=1 00:36:31.335 --rc genhtml_legend=1 00:36:31.335 --rc geninfo_all_blocks=1 00:36:31.335 --rc geninfo_unexecuted_blocks=1 00:36:31.335 00:36:31.335 ' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:36:31.335 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:36:31.336 11:50:36 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81348 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81348 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81348 ']' 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:31.336 11:50:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:31.336 [2024-11-20 11:50:37.024932] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:36:31.336 [2024-11-20 11:50:37.025175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81348 ] 00:36:31.595 [2024-11-20 11:50:37.221843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.853 [2024-11-20 11:50:37.381490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:36:32.789 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:36:33.048 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:33.307 { 00:36:33.307 "name": "nvme0n1", 00:36:33.307 "aliases": [ 00:36:33.307 "f5e68181-8af7-4880-9b16-568523753868" 00:36:33.307 ], 00:36:33.307 "product_name": "NVMe disk", 00:36:33.307 "block_size": 4096, 00:36:33.307 "num_blocks": 1310720, 00:36:33.307 "uuid": "f5e68181-8af7-4880-9b16-568523753868", 00:36:33.307 "numa_id": -1, 00:36:33.307 "assigned_rate_limits": { 00:36:33.307 "rw_ios_per_sec": 0, 00:36:33.307 "rw_mbytes_per_sec": 0, 00:36:33.307 "r_mbytes_per_sec": 0, 00:36:33.307 "w_mbytes_per_sec": 0 00:36:33.307 }, 00:36:33.307 "claimed": true, 00:36:33.307 "claim_type": "read_many_write_one", 00:36:33.307 "zoned": false, 00:36:33.307 "supported_io_types": { 00:36:33.307 "read": true, 00:36:33.307 "write": true, 00:36:33.307 "unmap": true, 00:36:33.307 "flush": true, 00:36:33.307 "reset": true, 00:36:33.307 "nvme_admin": true, 00:36:33.307 "nvme_io": true, 00:36:33.307 "nvme_io_md": false, 00:36:33.307 "write_zeroes": true, 00:36:33.307 "zcopy": false, 00:36:33.307 "get_zone_info": false, 00:36:33.307 "zone_management": false, 00:36:33.307 "zone_append": false, 00:36:33.307 "compare": true, 00:36:33.307 "compare_and_write": false, 00:36:33.307 "abort": true, 00:36:33.307 "seek_hole": false, 00:36:33.307 "seek_data": false, 00:36:33.307 
"copy": true, 00:36:33.307 "nvme_iov_md": false 00:36:33.307 }, 00:36:33.307 "driver_specific": { 00:36:33.307 "nvme": [ 00:36:33.307 { 00:36:33.307 "pci_address": "0000:00:11.0", 00:36:33.307 "trid": { 00:36:33.307 "trtype": "PCIe", 00:36:33.307 "traddr": "0000:00:11.0" 00:36:33.307 }, 00:36:33.307 "ctrlr_data": { 00:36:33.307 "cntlid": 0, 00:36:33.307 "vendor_id": "0x1b36", 00:36:33.307 "model_number": "QEMU NVMe Ctrl", 00:36:33.307 "serial_number": "12341", 00:36:33.307 "firmware_revision": "8.0.0", 00:36:33.307 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:33.307 "oacs": { 00:36:33.307 "security": 0, 00:36:33.307 "format": 1, 00:36:33.307 "firmware": 0, 00:36:33.307 "ns_manage": 1 00:36:33.307 }, 00:36:33.307 "multi_ctrlr": false, 00:36:33.307 "ana_reporting": false 00:36:33.307 }, 00:36:33.307 "vs": { 00:36:33.307 "nvme_version": "1.4" 00:36:33.307 }, 00:36:33.307 "ns_data": { 00:36:33.307 "id": 1, 00:36:33.307 "can_share": false 00:36:33.307 } 00:36:33.307 } 00:36:33.307 ], 00:36:33.307 "mp_policy": "active_passive" 00:36:33.307 } 00:36:33.307 } 00:36:33.307 ]' 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:33.307 11:50:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:33.565 11:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=64138c4b-6e57-4458-8136-3fabf739017a 00:36:33.565 11:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:36:33.565 11:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64138c4b-6e57-4458-8136-3fabf739017a 00:36:33.823 11:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:36:34.082 11:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=fab90042-d7b6-4e58-87d2-c747c9130f27 00:36:34.082 11:50:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fab90042-d7b6-4e58-87d2-c747c9130f27 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:36:34.341 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:34.600 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:34.600 { 00:36:34.600 "name": "781d5f3b-61be-4a9f-9c12-b7a15e6990e3", 00:36:34.600 "aliases": [ 00:36:34.600 "lvs/nvme0n1p0" 00:36:34.600 ], 00:36:34.600 "product_name": "Logical Volume", 00:36:34.600 "block_size": 4096, 00:36:34.600 "num_blocks": 26476544, 00:36:34.600 "uuid": "781d5f3b-61be-4a9f-9c12-b7a15e6990e3", 00:36:34.600 "assigned_rate_limits": { 00:36:34.600 "rw_ios_per_sec": 0, 00:36:34.600 "rw_mbytes_per_sec": 0, 00:36:34.600 "r_mbytes_per_sec": 0, 00:36:34.600 "w_mbytes_per_sec": 0 00:36:34.600 }, 00:36:34.600 "claimed": false, 00:36:34.600 "zoned": false, 00:36:34.600 "supported_io_types": { 00:36:34.600 "read": true, 00:36:34.600 "write": true, 00:36:34.600 "unmap": true, 00:36:34.600 "flush": false, 00:36:34.600 "reset": true, 00:36:34.600 "nvme_admin": false, 00:36:34.600 "nvme_io": false, 00:36:34.600 "nvme_io_md": false, 00:36:34.600 "write_zeroes": true, 00:36:34.600 "zcopy": false, 00:36:34.600 "get_zone_info": false, 00:36:34.600 "zone_management": false, 00:36:34.600 "zone_append": false, 00:36:34.600 "compare": false, 00:36:34.600 "compare_and_write": false, 00:36:34.600 "abort": false, 00:36:34.600 "seek_hole": true, 00:36:34.600 "seek_data": true, 00:36:34.600 "copy": false, 00:36:34.600 "nvme_iov_md": false 00:36:34.600 }, 00:36:34.600 "driver_specific": { 00:36:34.600 "lvol": { 00:36:34.600 "lvol_store_uuid": "fab90042-d7b6-4e58-87d2-c747c9130f27", 00:36:34.600 "base_bdev": "nvme0n1", 00:36:34.600 "thin_provision": true, 00:36:34.600 "num_allocated_clusters": 0, 00:36:34.600 "snapshot": false, 00:36:34.600 "clone": false, 00:36:34.600 "esnap_clone": false 00:36:34.600 } 00:36:34.600 } 00:36:34.600 } 00:36:34.600 ]' 00:36:34.600 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:36:34.859 11:50:40 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:36:35.117 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:36:35.117 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:36:35.117 11:50:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:35.118 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:35.118 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:35.118 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:35.118 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:36:35.118 11:50:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:35.377 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:35.377 { 00:36:35.377 "name": "781d5f3b-61be-4a9f-9c12-b7a15e6990e3", 00:36:35.377 "aliases": [ 00:36:35.377 "lvs/nvme0n1p0" 00:36:35.377 ], 00:36:35.377 "product_name": "Logical Volume", 00:36:35.377 "block_size": 4096, 00:36:35.377 "num_blocks": 26476544, 00:36:35.377 "uuid": "781d5f3b-61be-4a9f-9c12-b7a15e6990e3", 00:36:35.377 "assigned_rate_limits": { 00:36:35.377 "rw_ios_per_sec": 0, 00:36:35.377 "rw_mbytes_per_sec": 0, 00:36:35.377 "r_mbytes_per_sec": 0, 00:36:35.377 "w_mbytes_per_sec": 0 00:36:35.377 }, 00:36:35.377 "claimed": false, 00:36:35.377 "zoned": false, 00:36:35.377 "supported_io_types": { 00:36:35.377 "read": true, 00:36:35.377 "write": true, 00:36:35.377 "unmap": true, 00:36:35.377 "flush": false, 00:36:35.377 "reset": true, 00:36:35.377 "nvme_admin": false, 00:36:35.377 "nvme_io": false, 00:36:35.377 "nvme_io_md": false, 00:36:35.377 "write_zeroes": true, 00:36:35.377 "zcopy": false, 00:36:35.377 "get_zone_info": false, 00:36:35.377 "zone_management": false, 00:36:35.377 "zone_append": false, 00:36:35.377 "compare": false, 00:36:35.377 "compare_and_write": false, 00:36:35.377 "abort": false, 00:36:35.377 "seek_hole": true, 00:36:35.377 "seek_data": true, 00:36:35.377 "copy": false, 00:36:35.377 "nvme_iov_md": false 00:36:35.377 }, 00:36:35.377 "driver_specific": { 00:36:35.377 "lvol": { 00:36:35.377 "lvol_store_uuid": "fab90042-d7b6-4e58-87d2-c747c9130f27", 00:36:35.377 "base_bdev": "nvme0n1", 00:36:35.377 "thin_provision": true, 00:36:35.377 "num_allocated_clusters": 0, 00:36:35.377 "snapshot": false, 00:36:35.377 "clone": false, 00:36:35.377 "esnap_clone": false 00:36:35.377 } 00:36:35.377 } 00:36:35.377 } 00:36:35.377 ]' 00:36:35.377 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:35.377 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:35.377 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:35.636 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:36:35.636 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:36:35.636 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:36:35.636 11:50:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:36:35.636 11:50:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:36:35.894 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:36.152 { 00:36:36.152 "name": "781d5f3b-61be-4a9f-9c12-b7a15e6990e3", 00:36:36.152 "aliases": [ 00:36:36.152 "lvs/nvme0n1p0" 00:36:36.152 ], 00:36:36.152 "product_name": "Logical Volume", 00:36:36.152 "block_size": 4096, 00:36:36.152 "num_blocks": 26476544, 00:36:36.152 "uuid": "781d5f3b-61be-4a9f-9c12-b7a15e6990e3", 00:36:36.152 "assigned_rate_limits": { 00:36:36.152 "rw_ios_per_sec": 0, 00:36:36.152 "rw_mbytes_per_sec": 0, 00:36:36.152 "r_mbytes_per_sec": 0, 00:36:36.152 "w_mbytes_per_sec": 0 00:36:36.152 }, 00:36:36.152 "claimed": false, 00:36:36.152 "zoned": false, 00:36:36.152 "supported_io_types": { 00:36:36.152 "read": true, 00:36:36.152 "write": true, 00:36:36.152 "unmap": true, 00:36:36.152 "flush": false, 00:36:36.152 "reset": true, 00:36:36.152 "nvme_admin": false, 00:36:36.152 "nvme_io": false, 00:36:36.152 "nvme_io_md": false, 00:36:36.152 "write_zeroes": true, 00:36:36.152 "zcopy": false, 00:36:36.152 "get_zone_info": false, 00:36:36.152 "zone_management": false, 00:36:36.152 "zone_append": false, 00:36:36.152 "compare": false, 00:36:36.152 "compare_and_write": false, 00:36:36.152 "abort": false, 00:36:36.152 "seek_hole": true, 00:36:36.152 "seek_data": true, 00:36:36.152 "copy": false, 00:36:36.152 "nvme_iov_md": false 00:36:36.152 }, 00:36:36.152 "driver_specific": { 00:36:36.152 "lvol": { 00:36:36.152 "lvol_store_uuid": "fab90042-d7b6-4e58-87d2-c747c9130f27", 00:36:36.152 "base_bdev": "nvme0n1", 00:36:36.152 "thin_provision": true, 00:36:36.152 "num_allocated_clusters": 0, 00:36:36.152 "snapshot": false, 00:36:36.152 "clone": false, 00:36:36.152 "esnap_clone": false 00:36:36.152 } 00:36:36.152 } 00:36:36.152 } 00:36:36.152 ]' 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:36:36.152 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 
--l2p_dram_limit 10' 00:36:36.153 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:36:36.153 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:36:36.153 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:36:36.153 11:50:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 781d5f3b-61be-4a9f-9c12-b7a15e6990e3 --l2p_dram_limit 10 -c nvc0n1p0 00:36:36.413 [2024-11-20 11:50:42.126599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.126685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:36.413 [2024-11-20 11:50:42.126709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:36.413 [2024-11-20 11:50:42.126721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.126800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.126817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:36.413 [2024-11-20 11:50:42.126831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:36:36.413 [2024-11-20 11:50:42.126842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.126879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:36.413 [2024-11-20 11:50:42.127865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:36.413 [2024-11-20 11:50:42.127922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.127946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:36.413 [2024-11-20 11:50:42.127961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:36:36.413 [2024-11-20 11:50:42.127995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.128240] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4c8e7232-b3bc-4892-b477-51e48ee0263e 00:36:36.413 [2024-11-20 11:50:42.130239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.130299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:36:36.413 [2024-11-20 11:50:42.130330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:36:36.413 [2024-11-20 11:50:42.130346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.140494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.140581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:36.413 [2024-11-20 11:50:42.140603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.060 ms 00:36:36.413 [2024-11-20 11:50:42.140617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.140733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.140755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:36.413 [2024-11-20 11:50:42.140770] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:36:36.413 [2024-11-20 11:50:42.140788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.140880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.140909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:36.413 [2024-11-20 11:50:42.140922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:36:36.413 [2024-11-20 11:50:42.140939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.140970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:36.413 [2024-11-20 11:50:42.145860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.145917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:36.413 [2024-11-20 11:50:42.145952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:36:36.413 [2024-11-20 11:50:42.145964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.146007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.146021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:36.413 [2024-11-20 11:50:42.146036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:36.413 [2024-11-20 11:50:42.146046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.146092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:36:36.413 [2024-11-20 11:50:42.146256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:36.413 [2024-11-20 11:50:42.146284] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:36.413 [2024-11-20 11:50:42.146299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:36.413 [2024-11-20 11:50:42.146316] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:36.413 [2024-11-20 11:50:42.146330] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:36.413 [2024-11-20 11:50:42.146344] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:36.413 [2024-11-20 11:50:42.146355] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:36.413 [2024-11-20 11:50:42.146371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:36.413 [2024-11-20 11:50:42.146382] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:36.413 [2024-11-20 11:50:42.146396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.146407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:36.413 [2024-11-20 11:50:42.146421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:36:36.413 [2024-11-20 11:50:42.146445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.146563] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.413 [2024-11-20 11:50:42.146584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:36.413 [2024-11-20 11:50:42.146598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:36:36.413 [2024-11-20 11:50:42.146609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.413 [2024-11-20 11:50:42.146740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:36.413 [2024-11-20 11:50:42.146758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:36.413 [2024-11-20 11:50:42.146772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:36.413 [2024-11-20 11:50:42.146783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:36.413 [2024-11-20 11:50:42.146796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:36.413 [2024-11-20 11:50:42.146806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:36.413 [2024-11-20 11:50:42.146819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:36.413 [2024-11-20 11:50:42.146833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:36.413 [2024-11-20 11:50:42.146846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:36.413 [2024-11-20 11:50:42.146856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:36.413 [2024-11-20 11:50:42.146868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:36.413 [2024-11-20 11:50:42.146879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:36.413 [2024-11-20 11:50:42.146890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:36.413 [2024-11-20 11:50:42.146900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:36.413 [2024-11-20 11:50:42.146913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:36.413 [2024-11-20 11:50:42.146922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:36.413 [2024-11-20 11:50:42.146939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:36.413 [2024-11-20 11:50:42.146949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:36.413 [2024-11-20 11:50:42.146961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:36.413 [2024-11-20 11:50:42.146971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:36.413 [2024-11-20 11:50:42.146983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:36.413 [2024-11-20 11:50:42.146993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:36.413 [2024-11-20 11:50:42.147016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:36.413 [2024-11-20 11:50:42.147026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:36.414 [2024-11-20 11:50:42.147067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:36.414 [2024-11-20 11:50:42.147079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:36.414 [2024-11-20 11:50:42.147101] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:36.414 [2024-11-20 11:50:42.147111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:36.414 [2024-11-20 11:50:42.147135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:36.414 [2024-11-20 11:50:42.147150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:36.414 [2024-11-20 11:50:42.147183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:36.414 [2024-11-20 11:50:42.147193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:36.414 [2024-11-20 11:50:42.147205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:36.414 [2024-11-20 11:50:42.147216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:36.414 [2024-11-20 11:50:42.147228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:36.414 [2024-11-20 11:50:42.147238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:36.414 [2024-11-20 11:50:42.147261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:36.414 [2024-11-20 11:50:42.147274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147284] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:36.414 [2024-11-20 11:50:42.147299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:36.414 [2024-11-20 11:50:42.147311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:36.414 [2024-11-20 11:50:42.147323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:36.414 [2024-11-20 11:50:42.147335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:36.414 [2024-11-20 11:50:42.147350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:36.414 [2024-11-20 11:50:42.147360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:36.414 [2024-11-20 11:50:42.147373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:36.414 [2024-11-20 11:50:42.147383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:36.414 [2024-11-20 11:50:42.147396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:36.414 [2024-11-20 11:50:42.147415] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:36.414 [2024-11-20 11:50:42.147432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:36.414 [2024-11-20 11:50:42.147459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:36.414 [2024-11-20 11:50:42.147470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:36.414 [2024-11-20 11:50:42.147483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:36.414 [2024-11-20 11:50:42.147494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:36.414 [2024-11-20 11:50:42.147509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:36.414 [2024-11-20 11:50:42.147519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:36.414 [2024-11-20 11:50:42.147553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:36.414 [2024-11-20 11:50:42.147584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:36.414 [2024-11-20 11:50:42.147600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:36.414 [2024-11-20 11:50:42.147672] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:36.414 [2024-11-20 11:50:42.147687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:36.414 [2024-11-20 11:50:42.147713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:36.414 [2024-11-20 11:50:42.147724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:36.414 [2024-11-20 11:50:42.147738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:36.414 [2024-11-20 11:50:42.147749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.414 [2024-11-20 11:50:42.147763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:36.414 [2024-11-20 11:50:42.147782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:36:36.414 [2024-11-20 11:50:42.147804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.414 [2024-11-20 11:50:42.147867] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:36:36.414 [2024-11-20 11:50:42.147890] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:36:39.707 [2024-11-20 11:50:45.355876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.356003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:36:39.707 [2024-11-20 11:50:45.356025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3208.019 ms 00:36:39.707 [2024-11-20 11:50:45.356040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.396567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.396649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:39.707 [2024-11-20 11:50:45.396670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.189 ms 00:36:39.707 [2024-11-20 11:50:45.396687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.396954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.396979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:39.707 [2024-11-20 11:50:45.396995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:36:39.707 [2024-11-20 11:50:45.397012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.442637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.442718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:39.707 [2024-11-20 11:50:45.442739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.562 ms 00:36:39.707 [2024-11-20 11:50:45.442758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.442815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.442870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:39.707 [2024-11-20 11:50:45.442886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:39.707 [2024-11-20 11:50:45.442905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.443682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.443751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:39.707 [2024-11-20 11:50:45.443767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:36:39.707 [2024-11-20 11:50:45.443782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.443965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.443995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:39.707 [2024-11-20 11:50:45.444012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:36:39.707 [2024-11-20 11:50:45.444029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.707 [2024-11-20 11:50:45.465670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.707 [2024-11-20 11:50:45.465763] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:39.707 [2024-11-20 11:50:45.465801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.603 ms 00:36:39.707 [2024-11-20 11:50:45.465814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.479489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:39.965 [2024-11-20 11:50:45.483861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.483895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:39.965 [2024-11-20 11:50:45.483930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.855 ms 00:36:39.965 [2024-11-20 11:50:45.483942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.582687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.582755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:36:39.965 [2024-11-20 11:50:45.582795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.706 ms 00:36:39.965 [2024-11-20 11:50:45.582807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.583116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.583149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:39.965 [2024-11-20 11:50:45.583172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:36:39.965 [2024-11-20 11:50:45.583184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.610789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.610833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:36:39.965 [2024-11-20 11:50:45.610869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.535 ms 00:36:39.965 [2024-11-20 11:50:45.610881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.637311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.637377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:36:39.965 [2024-11-20 11:50:45.637414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.378 ms 00:36:39.965 [2024-11-20 11:50:45.637426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.638366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.638399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:39.965 [2024-11-20 11:50:45.638446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:36:39.965 [2024-11-20 11:50:45.638473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.965 [2024-11-20 11:50:45.726964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.965 [2024-11-20 11:50:45.727023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:36:39.965 [2024-11-20 11:50:45.727048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.423 ms 00:36:39.965 [2024-11-20 11:50:45.727061] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:40.222 [2024-11-20 11:50:45.757840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:40.222 [2024-11-20 11:50:45.757911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:36:40.222 [2024-11-20 11:50:45.757931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.682 ms 00:36:40.222 [2024-11-20 11:50:45.757944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:40.222 [2024-11-20 11:50:45.786652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:40.222 [2024-11-20 11:50:45.786696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:36:40.222 [2024-11-20 11:50:45.786731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.653 ms 00:36:40.222 [2024-11-20 11:50:45.786742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:40.222 [2024-11-20 11:50:45.815946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:40.222 [2024-11-20 11:50:45.815990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:40.222 [2024-11-20 11:50:45.816010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.135 ms 00:36:40.222 [2024-11-20 11:50:45.816021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:40.222 [2024-11-20 11:50:45.816090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:40.222 [2024-11-20 11:50:45.816109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:40.222 [2024-11-20 11:50:45.816144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:40.222 [2024-11-20 11:50:45.816155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:40.222 [2024-11-20 11:50:45.816282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:40.222 [2024-11-20 11:50:45.816303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:40.222 [2024-11-20 11:50:45.816323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:36:40.223 [2024-11-20 11:50:45.816334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:40.223 [2024-11-20 11:50:45.817885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3690.678 ms, result 0 00:36:40.223 { 00:36:40.223 "name": "ftl0", 00:36:40.223 "uuid": "4c8e7232-b3bc-4892-b477-51e48ee0263e" 00:36:40.223 } 00:36:40.223 11:50:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:36:40.223 11:50:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:36:40.481 11:50:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:36:40.481 11:50:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:36:40.481 11:50:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:36:40.739 /dev/nbd0 00:36:40.739 11:50:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:36:40.739 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:40.739 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:36:40.739 11:50:46 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:36:40.740 1+0 records in 00:36:40.740 1+0 records out 00:36:40.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851769 s, 4.8 MB/s 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:36:40.740 11:50:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:36:40.999 [2024-11-20 11:50:46.601227] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:36:40.999 [2024-11-20 11:50:46.602083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81509 ] 00:36:41.258 [2024-11-20 11:50:46.787042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.258 [2024-11-20 11:50:46.907764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.632  [2024-11-20T11:50:49.334Z] Copying: 173/1024 [MB] (173 MBps) [2024-11-20T11:50:50.272Z] Copying: 346/1024 [MB] (172 MBps) [2024-11-20T11:50:51.648Z] Copying: 518/1024 [MB] (172 MBps) [2024-11-20T11:50:52.214Z] Copying: 680/1024 [MB] (161 MBps) [2024-11-20T11:50:53.597Z] Copying: 837/1024 [MB] (157 MBps) [2024-11-20T11:50:53.597Z] Copying: 997/1024 [MB] (159 MBps) [2024-11-20T11:50:54.545Z] Copying: 1024/1024 [MB] (average 166 MBps) 00:36:48.779 00:36:48.779 11:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:51.311 11:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:36:51.311 [2024-11-20 11:50:56.703814] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
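The trace above (ftl/dirty_shutdown.sh steps 64-77) exports the freshly created ftl0 bdev as a kernel block device, fills a 1 GiB test file from /dev/urandom at ~166 MBps, checksums it, and then streams it onto /dev/nbd0. A minimal bash sketch of that sequence, restricted to the rpc.py/spdk_dd invocations and flags actually visible in the log; the SPDK variable is just shorthand for the workspace path:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Export the FTL bdev as a kernel block device (dirty_shutdown.sh@70-71).
  modprobe nbd
  "$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0
  # Fill a 1 GiB file (262144 x 4 KiB blocks) with random data (@75).
  "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
      --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
  # Checksum it so the contents can be compared after shutdown and recovery (@76).
  md5sum "$SPDK/test/ftl/testfile"
  # Stream the file onto the FTL device through its NBD export (@77).
  "$SPDK/build/bin/spdk_dd" -m 0x2 --if="$SPDK/test/ftl/testfile" \
      --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The copy that follows below averages 14 MBps versus 166 MBps for the plain file fill, presumably because every 4 KiB block now takes the NBD round trip plus the FTL write path.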
00:36:51.311 [2024-11-20 11:50:56.703998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81614 ] 00:36:51.311 [2024-11-20 11:50:56.894370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.311 [2024-11-20 11:50:57.045181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.686  [2024-11-20T11:50:59.387Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-20T11:51:00.762Z] Copying: 24/1024 [MB] (12 MBps) [2024-11-20T11:51:01.698Z] Copying: 38/1024 [MB] (13 MBps) [2024-11-20T11:51:02.632Z] Copying: 52/1024 [MB] (14 MBps) [2024-11-20T11:51:03.568Z] Copying: 67/1024 [MB] (15 MBps) [2024-11-20T11:51:04.504Z] Copying: 81/1024 [MB] (14 MBps) [2024-11-20T11:51:05.440Z] Copying: 96/1024 [MB] (14 MBps) [2024-11-20T11:51:06.375Z] Copying: 111/1024 [MB] (14 MBps) [2024-11-20T11:51:07.753Z] Copying: 126/1024 [MB] (14 MBps) [2024-11-20T11:51:08.688Z] Copying: 141/1024 [MB] (14 MBps) [2024-11-20T11:51:09.626Z] Copying: 155/1024 [MB] (14 MBps) [2024-11-20T11:51:10.563Z] Copying: 169/1024 [MB] (14 MBps) [2024-11-20T11:51:11.500Z] Copying: 184/1024 [MB] (14 MBps) [2024-11-20T11:51:12.435Z] Copying: 199/1024 [MB] (14 MBps) [2024-11-20T11:51:13.370Z] Copying: 213/1024 [MB] (14 MBps) [2024-11-20T11:51:14.742Z] Copying: 228/1024 [MB] (14 MBps) [2024-11-20T11:51:15.726Z] Copying: 242/1024 [MB] (14 MBps) [2024-11-20T11:51:16.655Z] Copying: 257/1024 [MB] (14 MBps) [2024-11-20T11:51:17.588Z] Copying: 272/1024 [MB] (14 MBps) [2024-11-20T11:51:18.521Z] Copying: 287/1024 [MB] (15 MBps) [2024-11-20T11:51:19.493Z] Copying: 302/1024 [MB] (15 MBps) [2024-11-20T11:51:20.427Z] Copying: 317/1024 [MB] (14 MBps) [2024-11-20T11:51:21.801Z] Copying: 332/1024 [MB] (14 MBps) [2024-11-20T11:51:22.369Z] Copying: 345/1024 [MB] (13 MBps) [2024-11-20T11:51:23.747Z] Copying: 358/1024 [MB] (12 MBps) [2024-11-20T11:51:24.683Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-20T11:51:25.620Z] Copying: 382/1024 [MB] (12 MBps) [2024-11-20T11:51:26.557Z] Copying: 394/1024 [MB] (12 MBps) [2024-11-20T11:51:27.492Z] Copying: 407/1024 [MB] (12 MBps) [2024-11-20T11:51:28.429Z] Copying: 420/1024 [MB] (13 MBps) [2024-11-20T11:51:29.368Z] Copying: 433/1024 [MB] (12 MBps) [2024-11-20T11:51:30.744Z] Copying: 446/1024 [MB] (13 MBps) [2024-11-20T11:51:31.680Z] Copying: 459/1024 [MB] (13 MBps) [2024-11-20T11:51:32.643Z] Copying: 473/1024 [MB] (13 MBps) [2024-11-20T11:51:33.579Z] Copying: 486/1024 [MB] (13 MBps) [2024-11-20T11:51:34.515Z] Copying: 498/1024 [MB] (12 MBps) [2024-11-20T11:51:35.452Z] Copying: 511/1024 [MB] (12 MBps) [2024-11-20T11:51:36.390Z] Copying: 523/1024 [MB] (12 MBps) [2024-11-20T11:51:37.767Z] Copying: 537/1024 [MB] (13 MBps) [2024-11-20T11:51:38.705Z] Copying: 551/1024 [MB] (13 MBps) [2024-11-20T11:51:39.642Z] Copying: 565/1024 [MB] (13 MBps) [2024-11-20T11:51:40.580Z] Copying: 579/1024 [MB] (14 MBps) [2024-11-20T11:51:41.517Z] Copying: 593/1024 [MB] (14 MBps) [2024-11-20T11:51:42.453Z] Copying: 607/1024 [MB] (14 MBps) [2024-11-20T11:51:43.391Z] Copying: 622/1024 [MB] (14 MBps) [2024-11-20T11:51:44.769Z] Copying: 637/1024 [MB] (14 MBps) [2024-11-20T11:51:45.705Z] Copying: 651/1024 [MB] (14 MBps) [2024-11-20T11:51:46.713Z] Copying: 667/1024 [MB] (15 MBps) [2024-11-20T11:51:47.650Z] Copying: 682/1024 [MB] (15 MBps) [2024-11-20T11:51:48.585Z] Copying: 697/1024 [MB] (14 MBps) [2024-11-20T11:51:49.521Z] 
Copying: 712/1024 [MB] (14 MBps) [2024-11-20T11:51:50.456Z] Copying: 725/1024 [MB] (13 MBps) [2024-11-20T11:51:51.391Z] Copying: 740/1024 [MB] (14 MBps) [2024-11-20T11:51:52.766Z] Copying: 755/1024 [MB] (14 MBps) [2024-11-20T11:51:53.700Z] Copying: 770/1024 [MB] (15 MBps) [2024-11-20T11:51:54.634Z] Copying: 785/1024 [MB] (14 MBps) [2024-11-20T11:51:55.569Z] Copying: 800/1024 [MB] (14 MBps) [2024-11-20T11:51:56.505Z] Copying: 815/1024 [MB] (14 MBps) [2024-11-20T11:51:57.440Z] Copying: 830/1024 [MB] (15 MBps) [2024-11-20T11:51:58.374Z] Copying: 845/1024 [MB] (14 MBps) [2024-11-20T11:51:59.751Z] Copying: 860/1024 [MB] (14 MBps) [2024-11-20T11:52:00.688Z] Copying: 875/1024 [MB] (14 MBps) [2024-11-20T11:52:01.658Z] Copying: 889/1024 [MB] (14 MBps) [2024-11-20T11:52:02.594Z] Copying: 904/1024 [MB] (14 MBps) [2024-11-20T11:52:03.531Z] Copying: 919/1024 [MB] (14 MBps) [2024-11-20T11:52:04.468Z] Copying: 934/1024 [MB] (14 MBps) [2024-11-20T11:52:05.403Z] Copying: 949/1024 [MB] (14 MBps) [2024-11-20T11:52:06.779Z] Copying: 963/1024 [MB] (14 MBps) [2024-11-20T11:52:07.715Z] Copying: 978/1024 [MB] (14 MBps) [2024-11-20T11:52:08.650Z] Copying: 992/1024 [MB] (14 MBps) [2024-11-20T11:52:09.584Z] Copying: 1007/1024 [MB] (14 MBps) [2024-11-20T11:52:09.584Z] Copying: 1021/1024 [MB] (14 MBps) [2024-11-20T11:52:10.518Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:38:04.752 00:38:04.752 11:52:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:38:04.752 11:52:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:38:05.318 11:52:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:05.318 [2024-11-20 11:52:11.069821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.318 [2024-11-20 11:52:11.069885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:05.319 [2024-11-20 11:52:11.069907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:05.319 [2024-11-20 11:52:11.069922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.319 [2024-11-20 11:52:11.069964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:05.319 [2024-11-20 11:52:11.073874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.319 [2024-11-20 11:52:11.073906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:05.319 [2024-11-20 11:52:11.073924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.878 ms 00:38:05.319 [2024-11-20 11:52:11.073935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.319 [2024-11-20 11:52:11.076159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.319 [2024-11-20 11:52:11.076198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:05.319 [2024-11-20 11:52:11.076217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:38:05.319 [2024-11-20 11:52:11.076236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.092222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.092293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:05.580 [2024-11-20 11:52:11.092321] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.956 ms 00:38:05.580 [2024-11-20 11:52:11.092335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.098379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.098411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:05.580 [2024-11-20 11:52:11.098438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.996 ms 00:38:05.580 [2024-11-20 11:52:11.098457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.128782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.128822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:05.580 [2024-11-20 11:52:11.128841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.200 ms 00:38:05.580 [2024-11-20 11:52:11.128863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.148575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.148623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:05.580 [2024-11-20 11:52:11.148645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.655 ms 00:38:05.580 [2024-11-20 11:52:11.148660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.148843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.148864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:05.580 [2024-11-20 11:52:11.148880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:38:05.580 [2024-11-20 11:52:11.148908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.180076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.180115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:05.580 [2024-11-20 11:52:11.180134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.139 ms 00:38:05.580 [2024-11-20 11:52:11.180145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.210015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.210053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:05.580 [2024-11-20 11:52:11.210073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.781 ms 00:38:05.580 [2024-11-20 11:52:11.210087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.237870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.237955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:05.580 [2024-11-20 11:52:11.237981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.714 ms 00:38:05.580 [2024-11-20 11:52:11.237993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.267250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.580 [2024-11-20 11:52:11.267364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
clean state 00:38:05.580 [2024-11-20 11:52:11.267409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.014 ms 00:38:05.580 [2024-11-20 11:52:11.267421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.580 [2024-11-20 11:52:11.267551] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:05.580 [2024-11-20 11:52:11.267598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267944] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.267987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:05.580 [2024-11-20 11:52:11.268223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 
11:52:11.268321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:38:05.581 [2024-11-20 11:52:11.268750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.268985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:05.581 [2024-11-20 11:52:11.269159] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:05.581 [2024-11-20 11:52:11.269174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c8e7232-b3bc-4892-b477-51e48ee0263e 00:38:05.581 [2024-11-20 11:52:11.269186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:05.581 [2024-11-20 11:52:11.269204] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:05.581 [2024-11-20 11:52:11.269215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:05.581 [2024-11-20 11:52:11.269234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:05.581 [2024-11-20 11:52:11.269245] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:05.581 [2024-11-20 11:52:11.269259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:05.581 [2024-11-20 11:52:11.269271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:05.581 [2024-11-20 11:52:11.269283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:05.581 [2024-11-20 11:52:11.269292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:05.581 [2024-11-20 11:52:11.269306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.581 [2024-11-20 11:52:11.269324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:05.581 [2024-11-20 11:52:11.269339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:38:05.581 [2024-11-20 11:52:11.269350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.581 [2024-11-20 11:52:11.286630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.581 [2024-11-20 11:52:11.286748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:05.581 [2024-11-20 11:52:11.286780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.112 ms 00:38:05.581 [2024-11-20 11:52:11.286794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.581 [2024-11-20 11:52:11.287342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.581 [2024-11-20 11:52:11.287373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:05.581 [2024-11-20 11:52:11.287391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:38:05.581 [2024-11-20 11:52:11.287412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 11:52:11.342675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.342802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:05.840 [2024-11-20 11:52:11.342827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.342840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 
11:52:11.342959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.342978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:05.840 [2024-11-20 11:52:11.343000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.343012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 11:52:11.343233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.343264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:05.840 [2024-11-20 11:52:11.343288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.343308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 11:52:11.343383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.343407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:05.840 [2024-11-20 11:52:11.343423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.343434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 11:52:11.456731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.456822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:05.840 [2024-11-20 11:52:11.456846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.456859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 11:52:11.549099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.549173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:05.840 [2024-11-20 11:52:11.549228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.549242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.840 [2024-11-20 11:52:11.549424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.840 [2024-11-20 11:52:11.549453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:05.840 [2024-11-20 11:52:11.549472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.840 [2024-11-20 11:52:11.549490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.841 [2024-11-20 11:52:11.549602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.841 [2024-11-20 11:52:11.549625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:05.841 [2024-11-20 11:52:11.549658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.841 [2024-11-20 11:52:11.549685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.841 [2024-11-20 11:52:11.549856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.841 [2024-11-20 11:52:11.549883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:05.841 [2024-11-20 11:52:11.549900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.841 [2024-11-20 11:52:11.549911] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.841 [2024-11-20 11:52:11.549974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.841 [2024-11-20 11:52:11.549999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:05.841 [2024-11-20 11:52:11.550014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.841 [2024-11-20 11:52:11.550027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.841 [2024-11-20 11:52:11.550085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.841 [2024-11-20 11:52:11.550100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:05.841 [2024-11-20 11:52:11.550114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.841 [2024-11-20 11:52:11.550125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.841 [2024-11-20 11:52:11.550230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.841 [2024-11-20 11:52:11.550256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:05.841 [2024-11-20 11:52:11.550290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.841 [2024-11-20 11:52:11.550305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.841 [2024-11-20 11:52:11.550522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 480.650 ms, result 0 00:38:05.841 true 00:38:05.841 11:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81348 00:38:05.841 11:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81348 00:38:05.841 11:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:38:06.099 [2024-11-20 11:52:11.690640] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
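By this point the 'FTL shutdown' management process has completed cleanly (result 0), and the kill -9 above takes spdk_tgt down with SIGKILL before the next phase. The second spdk_dd then writes straight through the ftl bdev, with no kernel block device in between, by booting its own SPDK app from the JSON config saved earlier. A sketch of that step, again using only the flags shown in the trace; $svcpid stands in for the concrete PID 81348:

  # SIGKILL the target and remove its trace file (dirty_shutdown.sh@83-84).
  kill -9 "$svcpid"
  rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
  # Prepare a second 1 GiB file, then write it through the ftl bdev at an
  # offset of 262144 blocks (--seek), past the region written before (@87-88).
  "$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
      --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
      --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"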
00:38:06.099 [2024-11-20 11:52:11.690836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82381 ] 00:38:06.357 [2024-11-20 11:52:11.872480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.357 [2024-11-20 11:52:12.002158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.733  [2024-11-20T11:52:14.497Z] Copying: 171/1024 [MB] (171 MBps) [2024-11-20T11:52:15.433Z] Copying: 340/1024 [MB] (169 MBps) [2024-11-20T11:52:16.370Z] Copying: 512/1024 [MB] (171 MBps) [2024-11-20T11:52:17.746Z] Copying: 688/1024 [MB] (175 MBps) [2024-11-20T11:52:18.682Z] Copying: 853/1024 [MB] (165 MBps) [2024-11-20T11:52:19.616Z] Copying: 1024/1024 [MB] (average 170 MBps) 00:38:13.850 00:38:13.850 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81348 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:38:13.850 11:52:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:13.850 [2024-11-20 11:52:19.515762] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:38:13.850 [2024-11-20 11:52:19.515972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82456 ] 00:38:14.108 [2024-11-20 11:52:19.692200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.108 [2024-11-20 11:52:19.835887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.674 [2024-11-20 11:52:20.225263] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:14.674 [2024-11-20 11:52:20.225370] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:14.674 [2024-11-20 11:52:20.293625] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:14.674 [2024-11-20 11:52:20.294022] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:14.674 [2024-11-20 11:52:20.294294] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:14.934 [2024-11-20 11:52:20.569197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.569259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:14.934 [2024-11-20 11:52:20.569282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:14.934 [2024-11-20 11:52:20.569295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.569381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.569403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:14.934 [2024-11-20 11:52:20.569429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:38:14.934 [2024-11-20 11:52:20.569441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.569475] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:14.934 [2024-11-20 11:52:20.570382] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:14.934 [2024-11-20 11:52:20.570429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.570444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:14.934 [2024-11-20 11:52:20.570458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:38:14.934 [2024-11-20 11:52:20.570470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.572955] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:14.934 [2024-11-20 11:52:20.590319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.590370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:14.934 [2024-11-20 11:52:20.590393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.366 ms 00:38:14.934 [2024-11-20 11:52:20.590407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.590481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.590503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:14.934 [2024-11-20 11:52:20.590518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:38:14.934 [2024-11-20 11:52:20.590530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.602312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.602360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:14.934 [2024-11-20 11:52:20.602379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.665 ms 00:38:14.934 [2024-11-20 11:52:20.602393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.602506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.602527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:14.934 [2024-11-20 11:52:20.602560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:38:14.934 [2024-11-20 11:52:20.602580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.602673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.602701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:14.934 [2024-11-20 11:52:20.602716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:14.934 [2024-11-20 11:52:20.602729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.602769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:14.934 [2024-11-20 11:52:20.608298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.608336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:14.934 [2024-11-20 11:52:20.608353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.540 ms 00:38:14.934 [2024-11-20 
11:52:20.608366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.608409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.608426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:14.934 [2024-11-20 11:52:20.608441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:38:14.934 [2024-11-20 11:52:20.608453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.608502] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:14.934 [2024-11-20 11:52:20.608558] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:14.934 [2024-11-20 11:52:20.608618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:14.934 [2024-11-20 11:52:20.608642] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:14.934 [2024-11-20 11:52:20.608759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:14.934 [2024-11-20 11:52:20.608777] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:14.934 [2024-11-20 11:52:20.608793] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:14.934 [2024-11-20 11:52:20.608810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:14.934 [2024-11-20 11:52:20.608832] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:14.934 [2024-11-20 11:52:20.608846] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:14.934 [2024-11-20 11:52:20.608858] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:14.934 [2024-11-20 11:52:20.608871] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:14.934 [2024-11-20 11:52:20.608897] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:14.934 [2024-11-20 11:52:20.608911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.608923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:14.934 [2024-11-20 11:52:20.608936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:38:14.934 [2024-11-20 11:52:20.608949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.609049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.934 [2024-11-20 11:52:20.609073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:14.934 [2024-11-20 11:52:20.609088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:38:14.934 [2024-11-20 11:52:20.609100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.934 [2024-11-20 11:52:20.609227] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:14.934 [2024-11-20 11:52:20.609257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:14.934 [2024-11-20 11:52:20.609273] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:38:14.934 [2024-11-20 11:52:20.609286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:14.934 [2024-11-20 11:52:20.609299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:14.934 [2024-11-20 11:52:20.609310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:14.934 [2024-11-20 11:52:20.609322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:14.934 [2024-11-20 11:52:20.609333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:14.934 [2024-11-20 11:52:20.609344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:14.934 [2024-11-20 11:52:20.609366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:14.934 [2024-11-20 11:52:20.609381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:14.934 [2024-11-20 11:52:20.609409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:14.934 [2024-11-20 11:52:20.609421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:14.935 [2024-11-20 11:52:20.609432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:14.935 [2024-11-20 11:52:20.609445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:14.935 [2024-11-20 11:52:20.609457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:14.935 [2024-11-20 11:52:20.609481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:14.935 [2024-11-20 11:52:20.609515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:14.935 [2024-11-20 11:52:20.609568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:14.935 [2024-11-20 11:52:20.609601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:14.935 [2024-11-20 11:52:20.609634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:14.935 [2024-11-20 11:52:20.609668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:14.935 [2024-11-20 11:52:20.609691] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:14.935 [2024-11-20 11:52:20.609702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:14.935 [2024-11-20 11:52:20.609713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:14.935 [2024-11-20 11:52:20.609724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:14.935 [2024-11-20 11:52:20.609736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:14.935 [2024-11-20 11:52:20.609747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:14.935 [2024-11-20 11:52:20.609770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:14.935 [2024-11-20 11:52:20.609781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:14.935 [2024-11-20 11:52:20.609806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:14.935 [2024-11-20 11:52:20.609819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:14.935 [2024-11-20 11:52:20.609852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:14.935 [2024-11-20 11:52:20.609864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:14.935 [2024-11-20 11:52:20.609876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:14.935 [2024-11-20 11:52:20.609889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:14.935 [2024-11-20 11:52:20.609900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:14.935 [2024-11-20 11:52:20.609911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:14.935 [2024-11-20 11:52:20.609925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:14.935 [2024-11-20 11:52:20.609940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.609956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:14.935 [2024-11-20 11:52:20.609968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:14.935 [2024-11-20 11:52:20.609981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:14.935 [2024-11-20 11:52:20.609993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:14.935 [2024-11-20 11:52:20.610005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:14.935 [2024-11-20 11:52:20.610017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:14.935 [2024-11-20 11:52:20.610030] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:14.935 [2024-11-20 11:52:20.610042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:14.935 [2024-11-20 11:52:20.610054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:14.935 [2024-11-20 11:52:20.610066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.610078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.610090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.610103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.610115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:14.935 [2024-11-20 11:52:20.610127] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:14.935 [2024-11-20 11:52:20.610142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.610155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:14.935 [2024-11-20 11:52:20.610168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:14.935 [2024-11-20 11:52:20.610180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:14.935 [2024-11-20 11:52:20.610202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:14.935 [2024-11-20 11:52:20.610216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.935 [2024-11-20 11:52:20.610229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:14.935 [2024-11-20 11:52:20.610242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:38:14.935 [2024-11-20 11:52:20.610254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.935 [2024-11-20 11:52:20.656202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.935 [2024-11-20 11:52:20.656274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:14.935 [2024-11-20 11:52:20.656295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.874 ms 00:38:14.935 [2024-11-20 11:52:20.656309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.935 [2024-11-20 11:52:20.656441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.935 [2024-11-20 11:52:20.656466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:14.935 [2024-11-20 11:52:20.656480] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:38:14.935 [2024-11-20 11:52:20.656493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.720505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.720586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:15.194 [2024-11-20 11:52:20.720610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.880 ms 00:38:15.194 [2024-11-20 11:52:20.720630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.720709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.720744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:15.194 [2024-11-20 11:52:20.720759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:15.194 [2024-11-20 11:52:20.720772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.721679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.721714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:15.194 [2024-11-20 11:52:20.721731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:38:15.194 [2024-11-20 11:52:20.721744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.721950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.721972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:15.194 [2024-11-20 11:52:20.721985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:38:15.194 [2024-11-20 11:52:20.722007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.743852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.743898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:15.194 [2024-11-20 11:52:20.743917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.814 ms 00:38:15.194 [2024-11-20 11:52:20.743930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.761426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:15.194 [2024-11-20 11:52:20.761470] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:15.194 [2024-11-20 11:52:20.761490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.761504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:15.194 [2024-11-20 11:52:20.761519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 00:38:15.194 [2024-11-20 11:52:20.761550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.791280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.791324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:15.194 [2024-11-20 11:52:20.791359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.675 ms 
00:38:15.194 [2024-11-20 11:52:20.791373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.806937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.806978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:15.194 [2024-11-20 11:52:20.806995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.513 ms 00:38:15.194 [2024-11-20 11:52:20.807008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.822143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.822183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:15.194 [2024-11-20 11:52:20.822200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.088 ms 00:38:15.194 [2024-11-20 11:52:20.822212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.823066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.823103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:15.194 [2024-11-20 11:52:20.823119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:38:15.194 [2024-11-20 11:52:20.823132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.906599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.906688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:15.194 [2024-11-20 11:52:20.906712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.431 ms 00:38:15.194 [2024-11-20 11:52:20.906727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.919054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:15.194 [2024-11-20 11:52:20.922077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.922113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:15.194 [2024-11-20 11:52:20.922132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.269 ms 00:38:15.194 [2024-11-20 11:52:20.922145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.194 [2024-11-20 11:52:20.922268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.194 [2024-11-20 11:52:20.922291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:15.194 [2024-11-20 11:52:20.922306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:15.195 [2024-11-20 11:52:20.922319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.195 [2024-11-20 11:52:20.922475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.195 [2024-11-20 11:52:20.922505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:15.195 [2024-11-20 11:52:20.922521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:38:15.195 [2024-11-20 11:52:20.922549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.195 [2024-11-20 11:52:20.922590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.195 [2024-11-20 
11:52:20.922616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:15.195 [2024-11-20 11:52:20.922630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:15.195 [2024-11-20 11:52:20.922643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.195 [2024-11-20 11:52:20.922693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:15.195 [2024-11-20 11:52:20.922713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.195 [2024-11-20 11:52:20.922726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:15.195 [2024-11-20 11:52:20.922739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:15.195 [2024-11-20 11:52:20.922751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.195 [2024-11-20 11:52:20.954315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.195 [2024-11-20 11:52:20.954362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:15.195 [2024-11-20 11:52:20.954382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.524 ms 00:38:15.195 [2024-11-20 11:52:20.954396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.195 [2024-11-20 11:52:20.954497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.195 [2024-11-20 11:52:20.954518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:15.195 [2024-11-20 11:52:20.954547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:38:15.195 [2024-11-20 11:52:20.954563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.195 [2024-11-20 11:52:20.956281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.436 ms, result 0 00:38:16.570  [2024-11-20T11:52:23.271Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-20T11:52:24.207Z] Copying: 46/1024 [MB] (23 MBps) [2024-11-20T11:52:25.143Z] Copying: 69/1024 [MB] (23 MBps) [2024-11-20T11:52:26.079Z] Copying: 92/1024 [MB] (22 MBps) [2024-11-20T11:52:27.016Z] Copying: 114/1024 [MB] (22 MBps) [2024-11-20T11:52:28.411Z] Copying: 137/1024 [MB] (23 MBps) [2024-11-20T11:52:28.980Z] Copying: 160/1024 [MB] (23 MBps) [2024-11-20T11:52:30.355Z] Copying: 184/1024 [MB] (23 MBps) [2024-11-20T11:52:31.289Z] Copying: 209/1024 [MB] (25 MBps) [2024-11-20T11:52:32.224Z] Copying: 234/1024 [MB] (24 MBps) [2024-11-20T11:52:33.159Z] Copying: 258/1024 [MB] (23 MBps) [2024-11-20T11:52:34.096Z] Copying: 281/1024 [MB] (23 MBps) [2024-11-20T11:52:35.032Z] Copying: 305/1024 [MB] (23 MBps) [2024-11-20T11:52:36.410Z] Copying: 330/1024 [MB] (24 MBps) [2024-11-20T11:52:36.977Z] Copying: 355/1024 [MB] (25 MBps) [2024-11-20T11:52:38.356Z] Copying: 380/1024 [MB] (24 MBps) [2024-11-20T11:52:39.298Z] Copying: 404/1024 [MB] (24 MBps) [2024-11-20T11:52:40.237Z] Copying: 429/1024 [MB] (24 MBps) [2024-11-20T11:52:41.175Z] Copying: 453/1024 [MB] (24 MBps) [2024-11-20T11:52:42.112Z] Copying: 477/1024 [MB] (24 MBps) [2024-11-20T11:52:43.049Z] Copying: 501/1024 [MB] (23 MBps) [2024-11-20T11:52:43.987Z] Copying: 525/1024 [MB] (23 MBps) [2024-11-20T11:52:45.365Z] Copying: 549/1024 [MB] (23 MBps) [2024-11-20T11:52:46.302Z] Copying: 572/1024 [MB] (23 MBps) [2024-11-20T11:52:47.240Z] Copying: 596/1024 [MB] (23 MBps) [2024-11-20T11:52:48.176Z] Copying: 620/1024 [MB] (23 
MBps) [2024-11-20T11:52:49.148Z] Copying: 643/1024 [MB] (23 MBps) [2024-11-20T11:52:50.100Z] Copying: 667/1024 [MB] (23 MBps) [2024-11-20T11:52:51.035Z] Copying: 691/1024 [MB] (23 MBps) [2024-11-20T11:52:51.971Z] Copying: 713/1024 [MB] (22 MBps) [2024-11-20T11:52:53.349Z] Copying: 735/1024 [MB] (21 MBps) [2024-11-20T11:52:54.285Z] Copying: 757/1024 [MB] (22 MBps) [2024-11-20T11:52:55.222Z] Copying: 780/1024 [MB] (22 MBps) [2024-11-20T11:52:56.159Z] Copying: 805/1024 [MB] (25 MBps) [2024-11-20T11:52:57.096Z] Copying: 830/1024 [MB] (25 MBps) [2024-11-20T11:52:58.075Z] Copying: 856/1024 [MB] (25 MBps) [2024-11-20T11:52:59.013Z] Copying: 880/1024 [MB] (23 MBps) [2024-11-20T11:53:00.390Z] Copying: 903/1024 [MB] (23 MBps) [2024-11-20T11:53:01.327Z] Copying: 927/1024 [MB] (24 MBps) [2024-11-20T11:53:02.264Z] Copying: 952/1024 [MB] (24 MBps) [2024-11-20T11:53:03.201Z] Copying: 975/1024 [MB] (23 MBps) [2024-11-20T11:53:04.137Z] Copying: 998/1024 [MB] (23 MBps) [2024-11-20T11:53:05.074Z] Copying: 1020/1024 [MB] (22 MBps) [2024-11-20T11:53:05.642Z] Copying: 1048292/1048576 [kB] (2876 kBps) [2024-11-20T11:53:05.642Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 11:53:05.360304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.876 [2024-11-20 11:53:05.360681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:59.876 [2024-11-20 11:53:05.360829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:38:59.876 [2024-11-20 11:53:05.360884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.876 [2024-11-20 11:53:05.363704] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:59.876 [2024-11-20 11:53:05.371843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.876 [2024-11-20 11:53:05.371999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:59.876 [2024-11-20 11:53:05.372155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.871 ms 00:38:59.876 [2024-11-20 11:53:05.372337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.876 [2024-11-20 11:53:05.384958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.876 [2024-11-20 11:53:05.385153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:59.876 [2024-11-20 11:53:05.385294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.390 ms 00:38:59.876 [2024-11-20 11:53:05.385332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.876 [2024-11-20 11:53:05.408936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.876 [2024-11-20 11:53:05.409198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:59.876 [2024-11-20 11:53:05.409228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.545 ms 00:38:59.876 [2024-11-20 11:53:05.409242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.876 [2024-11-20 11:53:05.416019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.876 [2024-11-20 11:53:05.416063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:59.876 [2024-11-20 11:53:05.416084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.732 ms 00:38:59.876 [2024-11-20 11:53:05.416096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.877 
[2024-11-20 11:53:05.451101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.877 [2024-11-20 11:53:05.451171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:59.877 [2024-11-20 11:53:05.451189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.936 ms 00:38:59.877 [2024-11-20 11:53:05.451201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.877 [2024-11-20 11:53:05.471651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.877 [2024-11-20 11:53:05.471698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:59.877 [2024-11-20 11:53:05.471721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.406 ms 00:38:59.877 [2024-11-20 11:53:05.471733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.877 [2024-11-20 11:53:05.591449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.877 [2024-11-20 11:53:05.591493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:59.877 [2024-11-20 11:53:05.591515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.671 ms 00:38:59.877 [2024-11-20 11:53:05.591554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.877 [2024-11-20 11:53:05.616895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.877 [2024-11-20 11:53:05.616932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:59.877 [2024-11-20 11:53:05.616947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.318 ms 00:38:59.877 [2024-11-20 11:53:05.616958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.137 [2024-11-20 11:53:05.642517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.137 [2024-11-20 11:53:05.642568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:00.137 [2024-11-20 11:53:05.642594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.510 ms 00:39:00.137 [2024-11-20 11:53:05.642604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.137 [2024-11-20 11:53:05.669713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.137 [2024-11-20 11:53:05.669770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:00.137 [2024-11-20 11:53:05.669788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.068 ms 00:39:00.137 [2024-11-20 11:53:05.669801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.137 [2024-11-20 11:53:05.701039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.137 [2024-11-20 11:53:05.701088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:00.137 [2024-11-20 11:53:05.701113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.138 ms 00:39:00.137 [2024-11-20 11:53:05.701124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.137 [2024-11-20 11:53:05.701166] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:00.137 [2024-11-20 11:53:05.701189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:39:00.137 [2024-11-20 11:53:05.701204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:00.137 [2024-11-20 11:53:05.701666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701932] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.701989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702281] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:00.138 [2024-11-20 11:53:05.702677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:00.138 [2024-11-20 11:53:05.702689] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c8e7232-b3bc-4892-b477-51e48ee0263e 00:39:00.138 [2024-11-20 11:53:05.702701] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:39:00.138 [2024-11-20 11:53:05.702718] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130240 00:39:00.138 [2024-11-20 11:53:05.702741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:39:00.138 [2024-11-20 11:53:05.702753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:39:00.138 [2024-11-20 11:53:05.702763] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:00.138 [2024-11-20 11:53:05.702774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:00.138 [2024-11-20 11:53:05.702785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:00.138 [2024-11-20 11:53:05.702794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:00.138 [2024-11-20 11:53:05.702819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:00.138 [2024-11-20 11:53:05.702830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.138 [2024-11-20 11:53:05.702841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:00.138 [2024-11-20 11:53:05.702852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:39:00.138 [2024-11-20 11:53:05.702879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.138 [2024-11-20 11:53:05.721331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.138 [2024-11-20 11:53:05.721384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:00.138 [2024-11-20 11:53:05.721412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.396 ms 00:39:00.138 [2024-11-20 11:53:05.721446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.138 [2024-11-20 11:53:05.722008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.138 [2024-11-20 11:53:05.722040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:00.138 [2024-11-20 11:53:05.722055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:39:00.138 [2024-11-20 11:53:05.722068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.138 [2024-11-20 11:53:05.772269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.138 [2024-11-20 11:53:05.772321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:00.138 [2024-11-20 11:53:05.772339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.138 [2024-11-20 11:53:05.772352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.138 [2024-11-20 11:53:05.772428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.138 [2024-11-20 11:53:05.772467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:00.138 [2024-11-20 11:53:05.772496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.139 [2024-11-20 11:53:05.772508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.139 [2024-11-20 11:53:05.772654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.139 [2024-11-20 11:53:05.772677] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:00.139 [2024-11-20 11:53:05.772693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.139 [2024-11-20 11:53:05.772734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.139 [2024-11-20 11:53:05.772774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.139 [2024-11-20 11:53:05.772803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:00.139 [2024-11-20 11:53:05.772815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.139 [2024-11-20 11:53:05.772826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.139 [2024-11-20 11:53:05.889920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.139 [2024-11-20 11:53:05.890033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:00.139 [2024-11-20 11:53:05.890055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.139 [2024-11-20 11:53:05.890068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.984478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.984577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:00.398 [2024-11-20 11:53:05.984603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.984615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.984805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.984825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:00.398 [2024-11-20 11:53:05.984839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.984857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.984920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.984945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:00.398 [2024-11-20 11:53:05.984960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.984972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.985121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.985149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:00.398 [2024-11-20 11:53:05.985162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.985174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.985244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.985264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:00.398 [2024-11-20 11:53:05.985277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.985289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.985367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.985418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:00.398 [2024-11-20 11:53:05.985431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.985445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.985521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.398 [2024-11-20 11:53:05.985540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:00.398 [2024-11-20 11:53:05.985572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.398 [2024-11-20 11:53:05.985587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.398 [2024-11-20 11:53:05.985777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 626.503 ms, result 0 00:39:02.304 00:39:02.304 00:39:02.304 11:53:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:39:04.837 11:53:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:04.837 [2024-11-20 11:53:10.150550] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:39:04.837 [2024-11-20 11:53:10.150760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82945 ] 00:39:04.837 [2024-11-20 11:53:10.334517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.837 [2024-11-20 11:53:10.515901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.407 [2024-11-20 11:53:10.941261] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:05.407 [2024-11-20 11:53:10.941415] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:05.407 [2024-11-20 11:53:11.114187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.114259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:05.407 [2024-11-20 11:53:11.114294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:05.407 [2024-11-20 11:53:11.114308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.114406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.114457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:05.407 [2024-11-20 11:53:11.114477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:39:05.407 [2024-11-20 11:53:11.114489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.114523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:05.407 [2024-11-20 11:53:11.115484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:05.407 [2024-11-20 11:53:11.115525] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.115574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:05.407 [2024-11-20 11:53:11.115590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:39:05.407 [2024-11-20 11:53:11.115601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.118667] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:05.407 [2024-11-20 11:53:11.137490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.137552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:05.407 [2024-11-20 11:53:11.137573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.825 ms 00:39:05.407 [2024-11-20 11:53:11.137586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.137669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.137691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:05.407 [2024-11-20 11:53:11.137712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:39:05.407 [2024-11-20 11:53:11.137725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.150906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.150958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:05.407 [2024-11-20 11:53:11.150975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.040 ms 00:39:05.407 [2024-11-20 11:53:11.150987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.151173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.151195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:05.407 [2024-11-20 11:53:11.151209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:39:05.407 [2024-11-20 11:53:11.151223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.151324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.151345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:05.407 [2024-11-20 11:53:11.151365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:05.407 [2024-11-20 11:53:11.151378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.151428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:05.407 [2024-11-20 11:53:11.157523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.157579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:05.407 [2024-11-20 11:53:11.157597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:39:05.407 [2024-11-20 11:53:11.157616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.157666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.157685] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:05.407 [2024-11-20 11:53:11.157700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:39:05.407 [2024-11-20 11:53:11.157712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.157785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:05.407 [2024-11-20 11:53:11.157833] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:05.407 [2024-11-20 11:53:11.157887] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:05.407 [2024-11-20 11:53:11.157913] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:05.407 [2024-11-20 11:53:11.158070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:05.407 [2024-11-20 11:53:11.158097] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:05.407 [2024-11-20 11:53:11.158114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:05.407 [2024-11-20 11:53:11.158131] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:05.407 [2024-11-20 11:53:11.158147] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:05.407 [2024-11-20 11:53:11.158166] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:05.407 [2024-11-20 11:53:11.158179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:05.407 [2024-11-20 11:53:11.158191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:05.407 [2024-11-20 11:53:11.158204] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:05.407 [2024-11-20 11:53:11.158224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.158236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:05.407 [2024-11-20 11:53:11.158249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:39:05.407 [2024-11-20 11:53:11.158262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.158363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.407 [2024-11-20 11:53:11.158381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:05.407 [2024-11-20 11:53:11.158395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:39:05.407 [2024-11-20 11:53:11.158407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.407 [2024-11-20 11:53:11.158553] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:05.407 [2024-11-20 11:53:11.158642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:05.407 [2024-11-20 11:53:11.158667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:05.407 [2024-11-20 11:53:11.158680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:39:05.408 [2024-11-20 11:53:11.158704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:05.408 [2024-11-20 11:53:11.158728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:05.408 [2024-11-20 11:53:11.158740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:05.408 [2024-11-20 11:53:11.158771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:05.408 [2024-11-20 11:53:11.158783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:05.408 [2024-11-20 11:53:11.158794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:05.408 [2024-11-20 11:53:11.158804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:05.408 [2024-11-20 11:53:11.158815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:05.408 [2024-11-20 11:53:11.158842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:05.408 [2024-11-20 11:53:11.158870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:05.408 [2024-11-20 11:53:11.158882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:05.408 [2024-11-20 11:53:11.158904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:05.408 [2024-11-20 11:53:11.158927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:05.408 [2024-11-20 11:53:11.158938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:05.408 [2024-11-20 11:53:11.158964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:05.408 [2024-11-20 11:53:11.158990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:05.408 [2024-11-20 11:53:11.159000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:05.408 [2024-11-20 11:53:11.159010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:05.408 [2024-11-20 11:53:11.159020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:05.408 [2024-11-20 11:53:11.159030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:05.408 [2024-11-20 11:53:11.159040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:05.408 [2024-11-20 11:53:11.159050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:05.408 [2024-11-20 11:53:11.159076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:05.408 [2024-11-20 11:53:11.159103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:05.408 [2024-11-20 11:53:11.159115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:05.408 [2024-11-20 11:53:11.159126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:05.408 [2024-11-20 11:53:11.159137] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:05.408 [2024-11-20 11:53:11.159148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:05.408 [2024-11-20 11:53:11.159159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:05.408 [2024-11-20 11:53:11.159176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.159187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:05.408 [2024-11-20 11:53:11.159198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:05.408 [2024-11-20 11:53:11.159210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.159220] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:05.408 [2024-11-20 11:53:11.159233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:05.408 [2024-11-20 11:53:11.159245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:05.408 [2024-11-20 11:53:11.159257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:05.408 [2024-11-20 11:53:11.159269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:05.408 [2024-11-20 11:53:11.159289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:05.408 [2024-11-20 11:53:11.159303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:05.408 [2024-11-20 11:53:11.159315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:05.408 [2024-11-20 11:53:11.159327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:05.408 [2024-11-20 11:53:11.159338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:05.408 [2024-11-20 11:53:11.159351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:05.408 [2024-11-20 11:53:11.159367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:05.408 [2024-11-20 11:53:11.159394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:05.408 [2024-11-20 11:53:11.159406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:05.408 [2024-11-20 11:53:11.159418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:05.408 [2024-11-20 11:53:11.159430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:05.408 [2024-11-20 11:53:11.159441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:05.408 [2024-11-20 11:53:11.159453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:05.408 [2024-11-20 11:53:11.159465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 00:39:05.408 [2024-11-20 11:53:11.159477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:05.408 [2024-11-20 11:53:11.159505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:05.408 [2024-11-20 11:53:11.159588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:05.408 [2024-11-20 11:53:11.159624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:05.408 [2024-11-20 11:53:11.159681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:05.408 [2024-11-20 11:53:11.159693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:05.408 [2024-11-20 11:53:11.159706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:05.408 [2024-11-20 11:53:11.159721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.408 [2024-11-20 11:53:11.159734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:05.408 [2024-11-20 11:53:11.159747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:39:05.408 [2024-11-20 11:53:11.159759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.211372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.211462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:05.669 [2024-11-20 11:53:11.211485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.536 ms 00:39:05.669 [2024-11-20 11:53:11.211497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.211644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.211664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:05.669 [2024-11-20 11:53:11.211679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:39:05.669 [2024-11-20 11:53:11.211690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.273823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:39:05.669 [2024-11-20 11:53:11.273925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:05.669 [2024-11-20 11:53:11.273945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.965 ms 00:39:05.669 [2024-11-20 11:53:11.273958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.274046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.274065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:05.669 [2024-11-20 11:53:11.274088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:05.669 [2024-11-20 11:53:11.274101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.275134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.275177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:05.669 [2024-11-20 11:53:11.275195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:39:05.669 [2024-11-20 11:53:11.275208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.275402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.275440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:05.669 [2024-11-20 11:53:11.275454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:39:05.669 [2024-11-20 11:53:11.275475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.299960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.300017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:05.669 [2024-11-20 11:53:11.300042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.427 ms 00:39:05.669 [2024-11-20 11:53:11.300055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.319606] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:39:05.669 [2024-11-20 11:53:11.319700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:05.669 [2024-11-20 11:53:11.319720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.319731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:05.669 [2024-11-20 11:53:11.319744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.457 ms 00:39:05.669 [2024-11-20 11:53:11.319755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.352960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.353009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:05.669 [2024-11-20 11:53:11.353026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.094 ms 00:39:05.669 [2024-11-20 11:53:11.353038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.370042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.370107] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:05.669 [2024-11-20 11:53:11.370126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.918 ms 00:39:05.669 [2024-11-20 11:53:11.370138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.386280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.386318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:05.669 [2024-11-20 11:53:11.386334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.086 ms 00:39:05.669 [2024-11-20 11:53:11.386345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.669 [2024-11-20 11:53:11.387249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.669 [2024-11-20 11:53:11.387285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:05.669 [2024-11-20 11:53:11.387301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:39:05.669 [2024-11-20 11:53:11.387320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.928 [2024-11-20 11:53:11.479867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.928 [2024-11-20 11:53:11.479962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:05.928 [2024-11-20 11:53:11.480009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.518 ms 00:39:05.928 [2024-11-20 11:53:11.480022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.928 [2024-11-20 11:53:11.493473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:05.928 [2024-11-20 11:53:11.497074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.928 [2024-11-20 11:53:11.497108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:05.928 [2024-11-20 11:53:11.497125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.966 ms 00:39:05.928 [2024-11-20 11:53:11.497147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.497288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.929 [2024-11-20 11:53:11.497310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:05.929 [2024-11-20 11:53:11.497325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:05.929 [2024-11-20 11:53:11.497344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.500084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.929 [2024-11-20 11:53:11.500130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:05.929 [2024-11-20 11:53:11.500154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:39:05.929 [2024-11-20 11:53:11.500167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.500208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.929 [2024-11-20 11:53:11.500236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:05.929 [2024-11-20 11:53:11.500249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:05.929 [2024-11-20 11:53:11.500262] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.500316] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:05.929 [2024-11-20 11:53:11.500343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.929 [2024-11-20 11:53:11.500355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:05.929 [2024-11-20 11:53:11.500369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:39:05.929 [2024-11-20 11:53:11.500396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.535405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.929 [2024-11-20 11:53:11.535457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:05.929 [2024-11-20 11:53:11.535476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.965 ms 00:39:05.929 [2024-11-20 11:53:11.535499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.535609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:05.929 [2024-11-20 11:53:11.535630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:05.929 [2024-11-20 11:53:11.535649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:39:05.929 [2024-11-20 11:53:11.535662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:05.929 [2024-11-20 11:53:11.542741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 425.476 ms, result 0 00:39:07.306  [2024-11-20T11:53:14.007Z] Copying: 932/1048576 [kB] (932 kBps) [2024-11-20T11:53:14.940Z] Copying: 4948/1048576 [kB] (4016 kBps) [2024-11-20T11:53:15.908Z] Copying: 26/1024 [MB] (21 MBps) [2024-11-20T11:53:16.850Z] Copying: 53/1024 [MB] (27 MBps) [2024-11-20T11:53:17.786Z] Copying: 77/1024 [MB] (24 MBps) [2024-11-20T11:53:19.163Z] Copying: 102/1024 [MB] (24 MBps) [2024-11-20T11:53:20.099Z] Copying: 128/1024 [MB] (25 MBps) [2024-11-20T11:53:21.034Z] Copying: 153/1024 [MB] (25 MBps) [2024-11-20T11:53:21.969Z] Copying: 179/1024 [MB] (26 MBps) [2024-11-20T11:53:22.905Z] Copying: 206/1024 [MB] (26 MBps) [2024-11-20T11:53:23.842Z] Copying: 232/1024 [MB] (26 MBps) [2024-11-20T11:53:25.219Z] Copying: 259/1024 [MB] (26 MBps) [2024-11-20T11:53:25.788Z] Copying: 286/1024 [MB] (26 MBps) [2024-11-20T11:53:27.163Z] Copying: 312/1024 [MB] (26 MBps) [2024-11-20T11:53:28.101Z] Copying: 338/1024 [MB] (25 MBps) [2024-11-20T11:53:29.153Z] Copying: 364/1024 [MB] (26 MBps) [2024-11-20T11:53:30.089Z] Copying: 390/1024 [MB] (26 MBps) [2024-11-20T11:53:31.025Z] Copying: 416/1024 [MB] (25 MBps) [2024-11-20T11:53:31.963Z] Copying: 441/1024 [MB] (24 MBps) [2024-11-20T11:53:32.900Z] Copying: 466/1024 [MB] (25 MBps) [2024-11-20T11:53:33.837Z] Copying: 491/1024 [MB] (25 MBps) [2024-11-20T11:53:35.215Z] Copying: 517/1024 [MB] (25 MBps) [2024-11-20T11:53:35.782Z] Copying: 542/1024 [MB] (25 MBps) [2024-11-20T11:53:37.162Z] Copying: 568/1024 [MB] (25 MBps) [2024-11-20T11:53:38.100Z] Copying: 593/1024 [MB] (25 MBps) [2024-11-20T11:53:39.037Z] Copying: 618/1024 [MB] (25 MBps) [2024-11-20T11:53:39.981Z] Copying: 644/1024 [MB] (25 MBps) [2024-11-20T11:53:40.917Z] Copying: 669/1024 [MB] (25 MBps) [2024-11-20T11:53:41.854Z] Copying: 695/1024 [MB] (25 MBps) [2024-11-20T11:53:42.790Z] Copying: 721/1024 [MB] (25 MBps) 
[2024-11-20T11:53:44.205Z] Copying: 746/1024 [MB] (25 MBps) [2024-11-20T11:53:45.141Z] Copying: 772/1024 [MB] (25 MBps) [2024-11-20T11:53:46.076Z] Copying: 798/1024 [MB] (25 MBps) [2024-11-20T11:53:47.012Z] Copying: 823/1024 [MB] (25 MBps) [2024-11-20T11:53:47.948Z] Copying: 849/1024 [MB] (25 MBps) [2024-11-20T11:53:48.885Z] Copying: 875/1024 [MB] (25 MBps) [2024-11-20T11:53:49.828Z] Copying: 900/1024 [MB] (25 MBps) [2024-11-20T11:53:51.206Z] Copying: 926/1024 [MB] (25 MBps) [2024-11-20T11:53:52.143Z] Copying: 952/1024 [MB] (26 MBps) [2024-11-20T11:53:53.080Z] Copying: 977/1024 [MB] (25 MBps) [2024-11-20T11:53:53.649Z] Copying: 1003/1024 [MB] (25 MBps) [2024-11-20T11:53:54.218Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 11:53:54.044107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.452 [2024-11-20 11:53:54.044205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:48.452 [2024-11-20 11:53:54.044230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:48.452 [2024-11-20 11:53:54.044246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.452 [2024-11-20 11:53:54.044286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:48.452 [2024-11-20 11:53:54.049607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.452 [2024-11-20 11:53:54.049655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:48.452 [2024-11-20 11:53:54.049674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.292 ms 00:39:48.452 [2024-11-20 11:53:54.049699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.452 [2024-11-20 11:53:54.050053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.452 [2024-11-20 11:53:54.050089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:48.452 [2024-11-20 11:53:54.050113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:39:48.452 [2024-11-20 11:53:54.050139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.452 [2024-11-20 11:53:54.063169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.452 [2024-11-20 11:53:54.063228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:48.452 [2024-11-20 11:53:54.063250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.003 ms 00:39:48.452 [2024-11-20 11:53:54.063266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.452 [2024-11-20 11:53:54.071669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.452 [2024-11-20 11:53:54.071718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:48.452 [2024-11-20 11:53:54.071736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.355 ms 00:39:48.452 [2024-11-20 11:53:54.071768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.452 [2024-11-20 11:53:54.111632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.452 [2024-11-20 11:53:54.111717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:48.452 [2024-11-20 11:53:54.111753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.781 ms 00:39:48.452 [2024-11-20 11:53:54.111768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:39:48.452 [2024-11-20 11:53:54.133231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.453 [2024-11-20 11:53:54.133292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:48.453 [2024-11-20 11:53:54.133318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.409 ms 00:39:48.453 [2024-11-20 11:53:54.133343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.453 [2024-11-20 11:53:54.135315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.453 [2024-11-20 11:53:54.135391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:48.453 [2024-11-20 11:53:54.135410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:39:48.453 [2024-11-20 11:53:54.135425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.453 [2024-11-20 11:53:54.174292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.453 [2024-11-20 11:53:54.174342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:48.453 [2024-11-20 11:53:54.174375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.831 ms 00:39:48.453 [2024-11-20 11:53:54.174389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.453 [2024-11-20 11:53:54.212648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.453 [2024-11-20 11:53:54.212729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:48.453 [2024-11-20 11:53:54.212779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.204 ms 00:39:48.453 [2024-11-20 11:53:54.212794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.713 [2024-11-20 11:53:54.247364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.713 [2024-11-20 11:53:54.247420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:48.713 [2024-11-20 11:53:54.247458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.513 ms 00:39:48.713 [2024-11-20 11:53:54.247469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.713 [2024-11-20 11:53:54.277470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.713 [2024-11-20 11:53:54.277512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:48.713 [2024-11-20 11:53:54.277545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.889 ms 00:39:48.713 [2024-11-20 11:53:54.277566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.713 [2024-11-20 11:53:54.277623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:48.713 [2024-11-20 11:53:54.277649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:39:48.713 [2024-11-20 11:53:54.277664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:39:48.713 [2024-11-20 11:53:54.277677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 
0 state: free 00:39:48.713 [2024-11-20 11:53:54.277722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.277991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 
/ 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:48.713 [2024-11-20 11:53:54.278454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278652] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:48.714 [2024-11-20 11:53:54.278906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:48.714 [2024-11-20 11:53:54.278917] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c8e7232-b3bc-4892-b477-51e48ee0263e 00:39:48.714 [2024-11-20 11:53:54.278929] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:39:48.714 [2024-11-20 11:53:54.278939] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135360 00:39:48.714 [2024-11-20 11:53:54.278965] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 133376 00:39:48.714 [2024-11-20 11:53:54.278983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:39:48.714 [2024-11-20 11:53:54.279009] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:48.714 [2024-11-20 11:53:54.279035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:48.714 [2024-11-20 11:53:54.279046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:48.714 [2024-11-20 11:53:54.279067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:48.714 [2024-11-20 11:53:54.279076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:48.714 [2024-11-20 11:53:54.279088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.714 [2024-11-20 11:53:54.279114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:48.714 [2024-11-20 11:53:54.279126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:39:48.714 [2024-11-20 11:53:54.279137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.296669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.714 [2024-11-20 11:53:54.296721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:48.714 [2024-11-20 11:53:54.296752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.490 ms 00:39:48.714 [2024-11-20 11:53:54.296763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.297229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.714 [2024-11-20 11:53:54.297256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:48.714 [2024-11-20 11:53:54.297270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:39:48.714 [2024-11-20 11:53:54.297280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.344106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.714 [2024-11-20 11:53:54.344194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:48.714 [2024-11-20 11:53:54.344226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.714 [2024-11-20 11:53:54.344236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.344314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.714 [2024-11-20 11:53:54.344345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:48.714 [2024-11-20 11:53:54.344357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.714 [2024-11-20 11:53:54.344369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.344448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.714 [2024-11-20 11:53:54.344476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:48.714 [2024-11-20 11:53:54.344488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.714 [2024-11-20 11:53:54.344500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.344523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.714 [2024-11-20 11:53:54.344538] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:48.714 [2024-11-20 11:53:54.344549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.714 [2024-11-20 11:53:54.344560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.714 [2024-11-20 11:53:54.453352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.714 [2024-11-20 11:53:54.453468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:48.714 [2024-11-20 11:53:54.453502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.714 [2024-11-20 11:53:54.453513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.540419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.973 [2024-11-20 11:53:54.540492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:48.973 [2024-11-20 11:53:54.540525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.973 [2024-11-20 11:53:54.540537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.540673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.973 [2024-11-20 11:53:54.540690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:48.973 [2024-11-20 11:53:54.540710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.973 [2024-11-20 11:53:54.540721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.540787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.973 [2024-11-20 11:53:54.540803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:48.973 [2024-11-20 11:53:54.540830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.973 [2024-11-20 11:53:54.540842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.540959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.973 [2024-11-20 11:53:54.540978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:48.973 [2024-11-20 11:53:54.540990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.973 [2024-11-20 11:53:54.541007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.541055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.973 [2024-11-20 11:53:54.541073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:48.973 [2024-11-20 11:53:54.541086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.973 [2024-11-20 11:53:54.541108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.541153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.973 [2024-11-20 11:53:54.541168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:48.973 [2024-11-20 11:53:54.541180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.973 [2024-11-20 11:53:54.541197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.973 [2024-11-20 11:53:54.541253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:39:48.973 [2024-11-20 11:53:54.541269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:48.974 [2024-11-20 11:53:54.541281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.974 [2024-11-20 11:53:54.541292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.974 [2024-11-20 11:53:54.541490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.309 ms, result 0 00:39:49.911 00:39:49.911 00:39:49.911 11:53:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:52.447 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:39:52.447 11:53:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:52.447 [2024-11-20 11:53:57.808059] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:39:52.447 [2024-11-20 11:53:57.808868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83400 ] 00:39:52.447 [2024-11-20 11:53:57.999703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.447 [2024-11-20 11:53:58.143704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.015 [2024-11-20 11:53:58.505504] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:53.015 [2024-11-20 11:53:58.505656] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:53.015 [2024-11-20 11:53:58.672576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.672697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:53.015 [2024-11-20 11:53:58.672743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:53.015 [2024-11-20 11:53:58.672769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.015 [2024-11-20 11:53:58.672870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.672890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:53.015 [2024-11-20 11:53:58.672909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:39:53.015 [2024-11-20 11:53:58.672920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.015 [2024-11-20 11:53:58.672950] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:53.015 [2024-11-20 11:53:58.674033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:53.015 [2024-11-20 11:53:58.674231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.674251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:53.015 [2024-11-20 11:53:58.674265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:39:53.015 [2024-11-20 11:53:58.674276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:53.015 [2024-11-20 11:53:58.676497] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:53.015 [2024-11-20 11:53:58.694331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.694534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:53.015 [2024-11-20 11:53:58.694573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:39:53.015 [2024-11-20 11:53:58.694587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.015 [2024-11-20 11:53:58.694664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.694682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:53.015 [2024-11-20 11:53:58.694695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:39:53.015 [2024-11-20 11:53:58.694706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.015 [2024-11-20 11:53:58.704719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.704776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:53.015 [2024-11-20 11:53:58.704821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.910 ms 00:39:53.015 [2024-11-20 11:53:58.704832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.015 [2024-11-20 11:53:58.704927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.015 [2024-11-20 11:53:58.704944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:53.016 [2024-11-20 11:53:58.704971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:39:53.016 [2024-11-20 11:53:58.704982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.016 [2024-11-20 11:53:58.705070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.016 [2024-11-20 11:53:58.705088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:53.016 [2024-11-20 11:53:58.705100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:53.016 [2024-11-20 11:53:58.705110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.016 [2024-11-20 11:53:58.705154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:53.016 [2024-11-20 11:53:58.710494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.016 [2024-11-20 11:53:58.710751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:53.016 [2024-11-20 11:53:58.710793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.360 ms 00:39:53.016 [2024-11-20 11:53:58.710814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.016 [2024-11-20 11:53:58.710855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.016 [2024-11-20 11:53:58.710869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:53.016 [2024-11-20 11:53:58.710883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:53.016 [2024-11-20 11:53:58.710893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.016 [2024-11-20 11:53:58.710938] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:39:53.016 [2024-11-20 11:53:58.710996] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:53.016 [2024-11-20 11:53:58.711087] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:53.016 [2024-11-20 11:53:58.711245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:53.016 [2024-11-20 11:53:58.711376] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:53.016 [2024-11-20 11:53:58.711392] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:53.016 [2024-11-20 11:53:58.711406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:53.016 [2024-11-20 11:53:58.711421] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:53.016 [2024-11-20 11:53:58.711449] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:53.016 [2024-11-20 11:53:58.711461] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:53.016 [2024-11-20 11:53:58.711471] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:53.016 [2024-11-20 11:53:58.711495] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:53.016 [2024-11-20 11:53:58.711505] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:53.016 [2024-11-20 11:53:58.711523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.016 [2024-11-20 11:53:58.711533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:53.016 [2024-11-20 11:53:58.711545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:39:53.016 [2024-11-20 11:53:58.711555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.016 [2024-11-20 11:53:58.711661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.016 [2024-11-20 11:53:58.711678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:53.016 [2024-11-20 11:53:58.711689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:39:53.016 [2024-11-20 11:53:58.711700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.016 [2024-11-20 11:53:58.711810] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:53.016 [2024-11-20 11:53:58.711836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:53.016 [2024-11-20 11:53:58.711848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:53.016 [2024-11-20 11:53:58.711859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:53.016 [2024-11-20 11:53:58.711870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:53.016 [2024-11-20 11:53:58.711880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:53.016 [2024-11-20 11:53:58.711890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:53.016 [2024-11-20 11:53:58.711900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:53.016 [2024-11-20 11:53:58.711911] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:53.016 [2024-11-20 11:53:58.711920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:53.016 [2024-11-20 11:53:58.711930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:53.016 [2024-11-20 11:53:58.711941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:53.016 [2024-11-20 11:53:58.711951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:53.016 [2024-11-20 11:53:58.711961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:53.016 [2024-11-20 11:53:58.711971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:53.016 [2024-11-20 11:53:58.711992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:53.016 [2024-11-20 11:53:58.712013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:53.016 [2024-11-20 11:53:58.712053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:53.016 [2024-11-20 11:53:58.712097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:53.016 [2024-11-20 11:53:58.712126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:53.016 [2024-11-20 11:53:58.712154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:53.016 [2024-11-20 11:53:58.712183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:53.016 [2024-11-20 11:53:58.712202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:53.016 [2024-11-20 11:53:58.712211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:53.016 [2024-11-20 11:53:58.712220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:53.016 [2024-11-20 11:53:58.712229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:53.016 [2024-11-20 11:53:58.712239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:53.016 [2024-11-20 11:53:58.712248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:39:53.016 [2024-11-20 11:53:58.712258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:53.016 [2024-11-20 11:53:58.712267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:53.016 [2024-11-20 11:53:58.712277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712288] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:53.016 [2024-11-20 11:53:58.712299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:53.016 [2024-11-20 11:53:58.712311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:53.016 [2024-11-20 11:53:58.712348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:53.016 [2024-11-20 11:53:58.712359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:53.016 [2024-11-20 11:53:58.712369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:53.016 [2024-11-20 11:53:58.712380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:53.016 [2024-11-20 11:53:58.712390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:53.016 [2024-11-20 11:53:58.712400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:53.016 [2024-11-20 11:53:58.712412] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:53.016 [2024-11-20 11:53:58.712426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:53.016 [2024-11-20 11:53:58.712438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:53.016 [2024-11-20 11:53:58.712449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:53.016 [2024-11-20 11:53:58.712460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:53.016 [2024-11-20 11:53:58.712471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:53.016 [2024-11-20 11:53:58.712482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:53.016 [2024-11-20 11:53:58.712493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:53.016 [2024-11-20 11:53:58.712518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:53.016 [2024-11-20 11:53:58.712529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:53.016 [2024-11-20 11:53:58.712540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:53.016 [2024-11-20 11:53:58.712550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:53.017 [2024-11-20 
11:53:58.712561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:53.017 [2024-11-20 11:53:58.712572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:53.017 [2024-11-20 11:53:58.712628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:53.017 [2024-11-20 11:53:58.712641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:53.017 [2024-11-20 11:53:58.712667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:53.017 [2024-11-20 11:53:58.712701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:53.017 [2024-11-20 11:53:58.712714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:53.017 [2024-11-20 11:53:58.712726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:53.017 [2024-11-20 11:53:58.712752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:53.017 [2024-11-20 11:53:58.712764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:53.017 [2024-11-20 11:53:58.712793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.017 [2024-11-20 11:53:58.712805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:53.017 [2024-11-20 11:53:58.712817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:39:53.017 [2024-11-20 11:53:58.712843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.017 [2024-11-20 11:53:58.753407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.017 [2024-11-20 11:53:58.753494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:53.017 [2024-11-20 11:53:58.753530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.492 ms 00:39:53.017 [2024-11-20 11:53:58.753542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.017 [2024-11-20 11:53:58.753726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.017 [2024-11-20 11:53:58.753742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:53.017 [2024-11-20 11:53:58.753755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:39:53.017 [2024-11-20 11:53:58.753783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.810144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.810400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:53.276 [2024-11-20 11:53:58.810430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.252 ms 00:39:53.276 [2024-11-20 11:53:58.810443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 
11:53:58.810525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.810558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:53.276 [2024-11-20 11:53:58.810587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:53.276 [2024-11-20 11:53:58.810633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.811360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.811379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:53.276 [2024-11-20 11:53:58.811392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:39:53.276 [2024-11-20 11:53:58.811403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.811570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.811590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:53.276 [2024-11-20 11:53:58.811602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:39:53.276 [2024-11-20 11:53:58.811666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.834342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.834399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:53.276 [2024-11-20 11:53:58.834437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.646 ms 00:39:53.276 [2024-11-20 11:53:58.834449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.852030] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:53.276 [2024-11-20 11:53:58.852247] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:53.276 [2024-11-20 11:53:58.852271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.852283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:53.276 [2024-11-20 11:53:58.852297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.671 ms 00:39:53.276 [2024-11-20 11:53:58.852308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.881155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.881206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:53.276 [2024-11-20 11:53:58.881238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.801 ms 00:39:53.276 [2024-11-20 11:53:58.881250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.896687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.896727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:53.276 [2024-11-20 11:53:58.896757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.392 ms 00:39:53.276 [2024-11-20 11:53:58.896768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.911922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.911961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:53.276 [2024-11-20 11:53:58.911992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.099 ms 00:39:53.276 [2024-11-20 11:53:58.912002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.912962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.913000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:53.276 [2024-11-20 11:53:58.913016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:39:53.276 [2024-11-20 11:53:58.913034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:58.989467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:58.989587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:53.276 [2024-11-20 11:53:58.989632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.400 ms 00:39:53.276 [2024-11-20 11:53:58.989644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.276 [2024-11-20 11:53:59.001781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:53.276 [2024-11-20 11:53:59.004555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.276 [2024-11-20 11:53:59.004616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:53.277 [2024-11-20 11:53:59.004651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.834 ms 00:39:53.277 [2024-11-20 11:53:59.004678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.004807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.277 [2024-11-20 11:53:59.004827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:53.277 [2024-11-20 11:53:59.004841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:53.277 [2024-11-20 11:53:59.004872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.006113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.277 [2024-11-20 11:53:59.006146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:53.277 [2024-11-20 11:53:59.006177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:39:53.277 [2024-11-20 11:53:59.006188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.006222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.277 [2024-11-20 11:53:59.006251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:53.277 [2024-11-20 11:53:59.006264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:53.277 [2024-11-20 11:53:59.006274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.006331] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:53.277 [2024-11-20 11:53:59.006351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.277 [2024-11-20 11:53:59.006363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:39:53.277 [2024-11-20 11:53:59.006375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:39:53.277 [2024-11-20 11:53:59.006385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.037662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.277 [2024-11-20 11:53:59.037719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:53.277 [2024-11-20 11:53:59.037767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.249 ms 00:39:53.277 [2024-11-20 11:53:59.037786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.037886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:53.277 [2024-11-20 11:53:59.037919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:53.277 [2024-11-20 11:53:59.037947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:39:53.277 [2024-11-20 11:53:59.037973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:53.277 [2024-11-20 11:53:59.039558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.345 ms, result 0 00:39:54.653  [2024-11-20T11:54:01.356Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-20T11:54:02.292Z] Copying: 44/1024 [MB] (21 MBps) [2024-11-20T11:54:03.668Z] Copying: 65/1024 [MB] (21 MBps) [2024-11-20T11:54:04.605Z] Copying: 86/1024 [MB] (21 MBps) [2024-11-20T11:54:05.541Z] Copying: 108/1024 [MB] (21 MBps) [2024-11-20T11:54:06.478Z] Copying: 130/1024 [MB] (21 MBps) [2024-11-20T11:54:07.414Z] Copying: 152/1024 [MB] (21 MBps) [2024-11-20T11:54:08.351Z] Copying: 173/1024 [MB] (21 MBps) [2024-11-20T11:54:09.297Z] Copying: 195/1024 [MB] (21 MBps) [2024-11-20T11:54:10.289Z] Copying: 216/1024 [MB] (21 MBps) [2024-11-20T11:54:11.665Z] Copying: 238/1024 [MB] (21 MBps) [2024-11-20T11:54:12.602Z] Copying: 259/1024 [MB] (21 MBps) [2024-11-20T11:54:13.538Z] Copying: 281/1024 [MB] (21 MBps) [2024-11-20T11:54:14.473Z] Copying: 303/1024 [MB] (21 MBps) [2024-11-20T11:54:15.410Z] Copying: 325/1024 [MB] (22 MBps) [2024-11-20T11:54:16.349Z] Copying: 347/1024 [MB] (21 MBps) [2024-11-20T11:54:17.287Z] Copying: 368/1024 [MB] (21 MBps) [2024-11-20T11:54:18.665Z] Copying: 390/1024 [MB] (21 MBps) [2024-11-20T11:54:19.601Z] Copying: 412/1024 [MB] (21 MBps) [2024-11-20T11:54:20.538Z] Copying: 433/1024 [MB] (21 MBps) [2024-11-20T11:54:21.476Z] Copying: 455/1024 [MB] (21 MBps) [2024-11-20T11:54:22.414Z] Copying: 477/1024 [MB] (21 MBps) [2024-11-20T11:54:23.351Z] Copying: 499/1024 [MB] (22 MBps) [2024-11-20T11:54:24.288Z] Copying: 521/1024 [MB] (22 MBps) [2024-11-20T11:54:25.663Z] Copying: 543/1024 [MB] (21 MBps) [2024-11-20T11:54:26.602Z] Copying: 564/1024 [MB] (21 MBps) [2024-11-20T11:54:27.561Z] Copying: 586/1024 [MB] (21 MBps) [2024-11-20T11:54:28.499Z] Copying: 607/1024 [MB] (21 MBps) [2024-11-20T11:54:29.435Z] Copying: 628/1024 [MB] (21 MBps) [2024-11-20T11:54:30.371Z] Copying: 650/1024 [MB] (21 MBps) [2024-11-20T11:54:31.308Z] Copying: 671/1024 [MB] (21 MBps) [2024-11-20T11:54:32.685Z] Copying: 693/1024 [MB] (21 MBps) [2024-11-20T11:54:33.252Z] Copying: 714/1024 [MB] (21 MBps) [2024-11-20T11:54:34.631Z] Copying: 735/1024 [MB] (20 MBps) [2024-11-20T11:54:35.567Z] Copying: 756/1024 [MB] (21 MBps) [2024-11-20T11:54:36.504Z] Copying: 777/1024 [MB] (20 MBps) [2024-11-20T11:54:37.454Z] Copying: 798/1024 [MB] (21 
MBps) [2024-11-20T11:54:38.393Z] Copying: 819/1024 [MB] (20 MBps) [2024-11-20T11:54:39.330Z] Copying: 840/1024 [MB] (21 MBps) [2024-11-20T11:54:40.267Z] Copying: 862/1024 [MB] (21 MBps) [2024-11-20T11:54:41.645Z] Copying: 883/1024 [MB] (21 MBps) [2024-11-20T11:54:42.263Z] Copying: 904/1024 [MB] (21 MBps) [2024-11-20T11:54:43.641Z] Copying: 926/1024 [MB] (21 MBps) [2024-11-20T11:54:44.577Z] Copying: 948/1024 [MB] (21 MBps) [2024-11-20T11:54:45.514Z] Copying: 969/1024 [MB] (21 MBps) [2024-11-20T11:54:46.449Z] Copying: 991/1024 [MB] (21 MBps) [2024-11-20T11:54:47.015Z] Copying: 1012/1024 [MB] (21 MBps) [2024-11-20T11:54:47.584Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-20 11:54:47.330855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.818 [2024-11-20 11:54:47.330959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:41.818 [2024-11-20 11:54:47.330986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:41.818 [2024-11-20 11:54:47.330999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.818 [2024-11-20 11:54:47.331032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:41.818 [2024-11-20 11:54:47.335018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.818 [2024-11-20 11:54:47.335387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:41.818 [2024-11-20 11:54:47.335431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.964 ms 00:40:41.818 [2024-11-20 11:54:47.335445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.818 [2024-11-20 11:54:47.336389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.818 [2024-11-20 11:54:47.336424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:41.818 [2024-11-20 11:54:47.336440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:40:41.818 [2024-11-20 11:54:47.336466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.818 [2024-11-20 11:54:47.339637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.818 [2024-11-20 11:54:47.339688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:41.819 [2024-11-20 11:54:47.339704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:40:41.819 [2024-11-20 11:54:47.339715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.345950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.345981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:41.819 [2024-11-20 11:54:47.345994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:40:41.819 [2024-11-20 11:54:47.346004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.375779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.375828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:41.819 [2024-11-20 11:54:47.375849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.613 ms 00:40:41.819 [2024-11-20 11:54:47.375860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.392776] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.392817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:41.819 [2024-11-20 11:54:47.392837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 00:40:41.819 [2024-11-20 11:54:47.392848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.394805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.394868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:41.819 [2024-11-20 11:54:47.395464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:40:41.819 [2024-11-20 11:54:47.395479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.421661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.421721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:41.819 [2024-11-20 11:54:47.421752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.150 ms 00:40:41.819 [2024-11-20 11:54:47.421763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.446466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.446518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:41.819 [2024-11-20 11:54:47.446547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.661 ms 00:40:41.819 [2024-11-20 11:54:47.446561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.470606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.470644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:41.819 [2024-11-20 11:54:47.470660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.005 ms 00:40:41.819 [2024-11-20 11:54:47.470670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.494679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.819 [2024-11-20 11:54:47.494717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:41.819 [2024-11-20 11:54:47.494733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.943 ms 00:40:41.819 [2024-11-20 11:54:47.494744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.819 [2024-11-20 11:54:47.494782] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:41.819 [2024-11-20 11:54:47.494805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:40:41.819 [2024-11-20 11:54:47.494828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:40:41.819 [2024-11-20 11:54:47.494839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 
11:54:47.494872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.494998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:40:41.819 [2024-11-20 11:54:47.495136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:41.819 [2024-11-20 11:54:47.495275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:41.820 [2024-11-20 11:54:47.495933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:41.820 [2024-11-20 11:54:47.495950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c8e7232-b3bc-4892-b477-51e48ee0263e 00:40:41.820 [2024-11-20 11:54:47.495963] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:40:41.820 [2024-11-20 11:54:47.495973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:41.820 [2024-11-20 11:54:47.495983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:41.820 [2024-11-20 
11:54:47.495993] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:41.820 [2024-11-20 11:54:47.496002] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:41.820 [2024-11-20 11:54:47.496013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:41.820 [2024-11-20 11:54:47.496035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:41.820 [2024-11-20 11:54:47.496045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:41.820 [2024-11-20 11:54:47.496054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:41.820 [2024-11-20 11:54:47.496064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.820 [2024-11-20 11:54:47.496074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:41.820 [2024-11-20 11:54:47.496086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:40:41.820 [2024-11-20 11:54:47.496096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.820 [2024-11-20 11:54:47.510524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.820 [2024-11-20 11:54:47.510572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:41.820 [2024-11-20 11:54:47.510589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.401 ms 00:40:41.820 [2024-11-20 11:54:47.510601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.820 [2024-11-20 11:54:47.511084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.821 [2024-11-20 11:54:47.511120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:41.821 [2024-11-20 11:54:47.511142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:40:41.821 [2024-11-20 11:54:47.511153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.821 [2024-11-20 11:54:47.550244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.821 [2024-11-20 11:54:47.550295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:41.821 [2024-11-20 11:54:47.550310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.821 [2024-11-20 11:54:47.550321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.821 [2024-11-20 11:54:47.550386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.821 [2024-11-20 11:54:47.550400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:41.821 [2024-11-20 11:54:47.550418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.821 [2024-11-20 11:54:47.550428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.821 [2024-11-20 11:54:47.550557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.821 [2024-11-20 11:54:47.550579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:41.821 [2024-11-20 11:54:47.550591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.821 [2024-11-20 11:54:47.550602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.821 [2024-11-20 11:54:47.550624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.821 [2024-11-20 11:54:47.550638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize valid map 00:40:41.821 [2024-11-20 11:54:47.550649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.821 [2024-11-20 11:54:47.550666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.644880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.645215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:42.080 [2024-11-20 11:54:47.645243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.645256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.719193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.719258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:42.080 [2024-11-20 11:54:47.719278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.719297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.719398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.719414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:42.080 [2024-11-20 11:54:47.719425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.719435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.719520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.719574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:42.080 [2024-11-20 11:54:47.719590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.719601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.719773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.719792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:42.080 [2024-11-20 11:54:47.719805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.719815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.719867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.719884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:42.080 [2024-11-20 11:54:47.719895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.719927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.719991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 11:54:47.720014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:42.080 [2024-11-20 11:54:47.720026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.720037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.720095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.080 [2024-11-20 
11:54:47.720111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:42.080 [2024-11-20 11:54:47.720123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.080 [2024-11-20 11:54:47.720134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.080 [2024-11-20 11:54:47.720297] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.402 ms, result 0 00:40:43.014 00:40:43.014 00:40:43.014 11:54:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:40:44.913 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:40:44.913 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:40:44.913 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:40:44.913 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:44.913 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:40:45.171 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:40:45.429 Process with pid 81348 is not found 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81348 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81348 ']' 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81348 00:40:45.429 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81348) - No such process 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81348 is not found' 00:40:45.429 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:40:45.688 Remove shared memory files 00:40:45.688 11:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:40:45.688 11:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:40:45.688 11:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:40:45.688 11:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:40:45.688 11:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:40:45.689 11:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:40:45.689 11:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:40:45.689 ************************************ 00:40:45.689 END TEST ftl_dirty_shutdown 00:40:45.689 ************************************ 00:40:45.689 00:40:45.689 real 4m14.618s 00:40:45.689 user 4m52.257s 00:40:45.689 sys 0m40.903s 00:40:45.689 11:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:45.689 11:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:45.689 11:54:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:40:45.689 11:54:51 ftl -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:45.689 11:54:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:45.689 11:54:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:40:45.689 ************************************ 00:40:45.689 START TEST ftl_upgrade_shutdown 00:40:45.689 ************************************ 00:40:45.689 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:40:45.689 * Looking for test storage... 00:40:45.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:40:45.689 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:45.689 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:40:45.689 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:40:45.948 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:45.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.949 --rc genhtml_branch_coverage=1 00:40:45.949 --rc genhtml_function_coverage=1 00:40:45.949 --rc genhtml_legend=1 00:40:45.949 --rc geninfo_all_blocks=1 00:40:45.949 --rc geninfo_unexecuted_blocks=1 00:40:45.949 00:40:45.949 ' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:45.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.949 --rc genhtml_branch_coverage=1 00:40:45.949 --rc genhtml_function_coverage=1 00:40:45.949 --rc genhtml_legend=1 00:40:45.949 --rc geninfo_all_blocks=1 00:40:45.949 --rc geninfo_unexecuted_blocks=1 00:40:45.949 00:40:45.949 ' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:45.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.949 --rc genhtml_branch_coverage=1 00:40:45.949 --rc genhtml_function_coverage=1 00:40:45.949 --rc genhtml_legend=1 00:40:45.949 --rc geninfo_all_blocks=1 00:40:45.949 --rc geninfo_unexecuted_blocks=1 00:40:45.949 00:40:45.949 ' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:45.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.949 --rc genhtml_branch_coverage=1 00:40:45.949 --rc genhtml_function_coverage=1 00:40:45.949 --rc genhtml_legend=1 00:40:45.949 --rc geninfo_all_blocks=1 00:40:45.949 --rc geninfo_unexecuted_blocks=1 00:40:45.949 00:40:45.949 ' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:40:45.949 11:54:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83995 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83995 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83995 ']' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:45.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:45.949 11:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:45.949 [2024-11-20 11:54:51.636831] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
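The harness has just launched a fresh spdk_tgt pinned to core 0 and is blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-poll pattern, assuming the repo path shown in the trace and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper also caps attempts at max_retries=100):

    # Start the SPDK target on core 0 and wait for /var/tmp/spdk.sock to answer.
    ROOT=/home/vagrant/spdk_repo/spdk            # path as seen in the trace
    "$ROOT/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # rpc.py exits non-zero until the target listens on its default UNIX socket;
    # rpc_get_methods is just a cheap RPC chosen here as the probe.
    until "$ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt pid $spdk_tgt_pid is up"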
00:40:45.949 [2024-11-20 11:54:51.636989] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83995 ] 00:40:46.208 [2024-11-20 11:54:51.817868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.466 [2024-11-20 11:54:51.979189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:40:47.402 11:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:40:47.662 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:47.921 { 00:40:47.921 "name": "basen1", 00:40:47.921 "aliases": [ 00:40:47.921 "e57bb267-7c77-40b6-b128-08929f8314d4" 00:40:47.921 ], 00:40:47.921 "product_name": "NVMe disk", 00:40:47.921 "block_size": 4096, 00:40:47.921 "num_blocks": 1310720, 00:40:47.921 "uuid": "e57bb267-7c77-40b6-b128-08929f8314d4", 00:40:47.921 "numa_id": -1, 00:40:47.921 "assigned_rate_limits": { 00:40:47.921 "rw_ios_per_sec": 0, 00:40:47.921 "rw_mbytes_per_sec": 0, 00:40:47.921 "r_mbytes_per_sec": 0, 00:40:47.921 "w_mbytes_per_sec": 0 00:40:47.921 }, 00:40:47.921 "claimed": true, 00:40:47.921 "claim_type": "read_many_write_one", 00:40:47.921 "zoned": false, 00:40:47.921 "supported_io_types": { 00:40:47.921 "read": true, 00:40:47.921 "write": true, 00:40:47.921 "unmap": true, 00:40:47.921 "flush": true, 00:40:47.921 "reset": true, 00:40:47.921 "nvme_admin": true, 00:40:47.921 "nvme_io": true, 00:40:47.921 "nvme_io_md": false, 00:40:47.921 "write_zeroes": true, 00:40:47.921 "zcopy": false, 00:40:47.921 "get_zone_info": false, 00:40:47.921 "zone_management": false, 00:40:47.921 "zone_append": false, 00:40:47.921 "compare": true, 00:40:47.921 "compare_and_write": false, 00:40:47.921 "abort": true, 00:40:47.921 "seek_hole": false, 00:40:47.921 "seek_data": false, 00:40:47.921 "copy": true, 00:40:47.921 "nvme_iov_md": false 00:40:47.921 }, 00:40:47.921 "driver_specific": { 00:40:47.921 "nvme": [ 00:40:47.921 { 00:40:47.921 "pci_address": "0000:00:11.0", 00:40:47.921 "trid": { 00:40:47.921 "trtype": "PCIe", 00:40:47.921 "traddr": "0000:00:11.0" 00:40:47.921 }, 00:40:47.921 "ctrlr_data": { 00:40:47.921 "cntlid": 0, 00:40:47.921 "vendor_id": "0x1b36", 00:40:47.921 "model_number": "QEMU NVMe Ctrl", 00:40:47.921 "serial_number": "12341", 00:40:47.921 "firmware_revision": "8.0.0", 00:40:47.921 "subnqn": "nqn.2019-08.org.qemu:12341", 00:40:47.921 "oacs": { 00:40:47.921 "security": 0, 00:40:47.921 "format": 1, 00:40:47.921 "firmware": 0, 00:40:47.921 "ns_manage": 1 00:40:47.921 }, 00:40:47.921 "multi_ctrlr": false, 00:40:47.921 "ana_reporting": false 00:40:47.921 }, 00:40:47.921 "vs": { 00:40:47.921 "nvme_version": "1.4" 00:40:47.921 }, 00:40:47.921 "ns_data": { 00:40:47.921 "id": 1, 00:40:47.921 "can_share": false 00:40:47.921 } 00:40:47.921 } 00:40:47.921 ], 00:40:47.921 "mp_policy": "active_passive" 00:40:47.921 } 00:40:47.921 } 00:40:47.921 ]' 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:47.921 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:40:48.179 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=fab90042-d7b6-4e58-87d2-c747c9130f27 00:40:48.179 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:40:48.179 11:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fab90042-d7b6-4e58-87d2-c747c9130f27 00:40:48.438 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:40:48.696 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=69a96526-865d-45bc-8ac3-b171123225a5 00:40:48.696 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 69a96526-865d-45bc-8ac3-b171123225a5 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=696c98cb-e610-4f11-b55f-99e6ace91f82 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 696c98cb-e610-4f11-b55f-99e6ace91f82 ]] 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 696c98cb-e610-4f11-b55f-99e6ace91f82 5120 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=696c98cb-e610-4f11-b55f-99e6ace91f82 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 696c98cb-e610-4f11-b55f-99e6ace91f82 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=696c98cb-e610-4f11-b55f-99e6ace91f82 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:40:48.956 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 696c98cb-e610-4f11-b55f-99e6ace91f82 00:40:49.215 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:49.215 { 00:40:49.215 "name": "696c98cb-e610-4f11-b55f-99e6ace91f82", 00:40:49.215 "aliases": [ 00:40:49.215 "lvs/basen1p0" 00:40:49.215 ], 00:40:49.215 "product_name": "Logical Volume", 00:40:49.215 "block_size": 4096, 00:40:49.215 "num_blocks": 5242880, 00:40:49.215 "uuid": "696c98cb-e610-4f11-b55f-99e6ace91f82", 00:40:49.215 "assigned_rate_limits": { 00:40:49.215 "rw_ios_per_sec": 0, 00:40:49.215 "rw_mbytes_per_sec": 0, 00:40:49.215 "r_mbytes_per_sec": 0, 00:40:49.215 "w_mbytes_per_sec": 0 00:40:49.215 }, 00:40:49.215 "claimed": false, 00:40:49.215 "zoned": false, 00:40:49.215 "supported_io_types": { 00:40:49.215 "read": true, 00:40:49.215 "write": true, 00:40:49.215 "unmap": true, 00:40:49.215 "flush": false, 00:40:49.215 "reset": true, 00:40:49.215 "nvme_admin": false, 00:40:49.215 "nvme_io": false, 00:40:49.215 "nvme_io_md": false, 00:40:49.215 "write_zeroes": 
true, 00:40:49.215 "zcopy": false, 00:40:49.215 "get_zone_info": false, 00:40:49.215 "zone_management": false, 00:40:49.215 "zone_append": false, 00:40:49.215 "compare": false, 00:40:49.215 "compare_and_write": false, 00:40:49.215 "abort": false, 00:40:49.215 "seek_hole": true, 00:40:49.215 "seek_data": true, 00:40:49.215 "copy": false, 00:40:49.215 "nvme_iov_md": false 00:40:49.215 }, 00:40:49.215 "driver_specific": { 00:40:49.215 "lvol": { 00:40:49.215 "lvol_store_uuid": "69a96526-865d-45bc-8ac3-b171123225a5", 00:40:49.215 "base_bdev": "basen1", 00:40:49.215 "thin_provision": true, 00:40:49.215 "num_allocated_clusters": 0, 00:40:49.215 "snapshot": false, 00:40:49.215 "clone": false, 00:40:49.215 "esnap_clone": false 00:40:49.215 } 00:40:49.215 } 00:40:49.215 } 00:40:49.215 ]' 00:40:49.215 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:49.215 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:40:49.215 11:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:49.472 11:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:40:49.472 11:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:40:49.473 11:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:40:49.473 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:40:49.473 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:40:49.473 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:40:49.731 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:40:49.731 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:40:49.731 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:40:49.989 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:40:49.989 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:40:49.989 11:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 696c98cb-e610-4f11-b55f-99e6ace91f82 -c cachen1p0 --l2p_dram_limit 2 00:40:50.248 [2024-11-20 11:54:55.956781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.248 [2024-11-20 11:54:55.956842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:40:50.249 [2024-11-20 11:54:55.956867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:40:50.249 [2024-11-20 11:54:55.956878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.956941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.956957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:40:50.249 [2024-11-20 11:54:55.956972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:40:50.249 [2024-11-20 11:54:55.956982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.957011] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:40:50.249 [2024-11-20 
11:54:55.957833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:40:50.249 [2024-11-20 11:54:55.957892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.957906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:40:50.249 [2024-11-20 11:54:55.957920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.884 ms 00:40:50.249 [2024-11-20 11:54:55.957931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.958020] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID bb4a33d2-f5a4-4ef7-b326-f904d00d0155 00:40:50.249 [2024-11-20 11:54:55.960356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.960395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:40:50.249 [2024-11-20 11:54:55.960411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:40:50.249 [2024-11-20 11:54:55.960425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.973969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.974026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:40:50.249 [2024-11-20 11:54:55.974046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.481 ms 00:40:50.249 [2024-11-20 11:54:55.974060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.974130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.974150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:40:50.249 [2024-11-20 11:54:55.974163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:40:50.249 [2024-11-20 11:54:55.974180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.974276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.974302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:40:50.249 [2024-11-20 11:54:55.974315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:40:50.249 [2024-11-20 11:54:55.974350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.974382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:40:50.249 [2024-11-20 11:54:55.979743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.979781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:40:50.249 [2024-11-20 11:54:55.979812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.367 ms 00:40:50.249 [2024-11-20 11:54:55.979823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.979861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.979875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:40:50.249 [2024-11-20 11:54:55.979890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:40:50.249 [2024-11-20 11:54:55.979901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.979948] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:40:50.249 [2024-11-20 11:54:55.980091] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:40:50.249 [2024-11-20 11:54:55.980116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:40:50.249 [2024-11-20 11:54:55.980131] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:40:50.249 [2024-11-20 11:54:55.980148] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:40:50.249 [2024-11-20 11:54:55.980161] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:40:50.249 [2024-11-20 11:54:55.980174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:40:50.249 [2024-11-20 11:54:55.980185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:40:50.249 [2024-11-20 11:54:55.980201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:40:50.249 [2024-11-20 11:54:55.980212] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:40:50.249 [2024-11-20 11:54:55.980226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.980236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:40:50.249 [2024-11-20 11:54:55.980249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:40:50.249 [2024-11-20 11:54:55.980261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.980360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.249 [2024-11-20 11:54:55.980375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:40:50.249 [2024-11-20 11:54:55.980389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:40:50.249 [2024-11-20 11:54:55.980412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.249 [2024-11-20 11:54:55.980524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:40:50.249 [2024-11-20 11:54:55.980566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:40:50.249 [2024-11-20 11:54:55.980585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:40:50.249 [2024-11-20 11:54:55.980596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.249 [2024-11-20 11:54:55.980610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:40:50.249 [2024-11-20 11:54:55.980621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:40:50.249 [2024-11-20 11:54:55.980634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:40:50.249 [2024-11-20 11:54:55.980644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:40:50.249 [2024-11-20 11:54:55.980656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:40:50.249 [2024-11-20 11:54:55.980666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.249 [2024-11-20 11:54:55.980679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:40:50.249 [2024-11-20 11:54:55.980689] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:40:50.249 [2024-11-20 11:54:55.980715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.249 [2024-11-20 11:54:55.980724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:40:50.249 [2024-11-20 11:54:55.980736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:40:50.249 [2024-11-20 11:54:55.980746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.249 [2024-11-20 11:54:55.980763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:40:50.249 [2024-11-20 11:54:55.980773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:40:50.249 [2024-11-20 11:54:55.980785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.249 [2024-11-20 11:54:55.980796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:40:50.249 [2024-11-20 11:54:55.980809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:40:50.249 [2024-11-20 11:54:55.980821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:50.249 [2024-11-20 11:54:55.980833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:40:50.249 [2024-11-20 11:54:55.980842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:40:50.249 [2024-11-20 11:54:55.980854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:50.249 [2024-11-20 11:54:55.980863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:40:50.249 [2024-11-20 11:54:55.980874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:40:50.250 [2024-11-20 11:54:55.980883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:50.250 [2024-11-20 11:54:55.980895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:40:50.250 [2024-11-20 11:54:55.980904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:40:50.250 [2024-11-20 11:54:55.980916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:50.250 [2024-11-20 11:54:55.980925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:40:50.250 [2024-11-20 11:54:55.980940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:40:50.250 [2024-11-20 11:54:55.980949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.250 [2024-11-20 11:54:55.980960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:40:50.250 [2024-11-20 11:54:55.980970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:40:50.250 [2024-11-20 11:54:55.980981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.250 [2024-11-20 11:54:55.980991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:40:50.250 [2024-11-20 11:54:55.981002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:40:50.250 [2024-11-20 11:54:55.981011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.250 [2024-11-20 11:54:55.981023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:40:50.250 [2024-11-20 11:54:55.981032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:40:50.250 [2024-11-20 11:54:55.981044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.250 [2024-11-20 11:54:55.981053] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:40:50.250 [2024-11-20 11:54:55.981068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:40:50.250 [2024-11-20 11:54:55.981078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:40:50.250 [2024-11-20 11:54:55.981090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:50.250 [2024-11-20 11:54:55.981100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:40:50.250 [2024-11-20 11:54:55.981115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:40:50.250 [2024-11-20 11:54:55.981125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:40:50.250 [2024-11-20 11:54:55.981137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:40:50.250 [2024-11-20 11:54:55.981146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:40:50.250 [2024-11-20 11:54:55.981160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:40:50.250 [2024-11-20 11:54:55.981176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:40:50.250 [2024-11-20 11:54:55.981192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:40:50.250 [2024-11-20 11:54:55.981221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:40:50.250 [2024-11-20 11:54:55.981253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:40:50.250 [2024-11-20 11:54:55.981266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:40:50.250 [2024-11-20 11:54:55.981276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:40:50.250 [2024-11-20 11:54:55.981289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:40:50.250 [2024-11-20 11:54:55.981391] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:40:50.250 [2024-11-20 11:54:55.981405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:50.250 [2024-11-20 11:54:55.981459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:40:50.250 [2024-11-20 11:54:55.981471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:40:50.250 [2024-11-20 11:54:55.981485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:40:50.250 [2024-11-20 11:54:55.981497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:50.250 [2024-11-20 11:54:55.981510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:40:50.250 [2024-11-20 11:54:55.981522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.034 ms 00:40:50.250 [2024-11-20 11:54:55.981536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:50.250 [2024-11-20 11:54:55.981607] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
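By the time the NV cache scrub begins, the whole bdev stack under the FTL device has been assembled with the RPCs traced above. Condensed into one sketch (the $rpc alias and the $(...) captures are shorthand; both UUIDs were printed by the lvstore/lvol RPCs at run time):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base side: QEMU NVMe at 0000:00:11.0 -> basen1 (1310720 x 4096 B = 5 GiB)
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)           # -> 69a96526-...
    lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs") # thin 20 GiB -> 696c98cb-...
    # Cache side: NVMe at 0000:00:10.0, first 5120 MiB split off as cachen1p0
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1
    # FTL bdev on top: lvol as base, cachen1p0 as NV cache, L2P DRAM limit 2
    # (the l2p cache notice below reports the resident size against that limit)
    $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2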
00:40:50.250 [2024-11-20 11:54:55.981633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:40:53.535 [2024-11-20 11:54:58.578069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.578128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:40:53.535 [2024-11-20 11:54:58.578150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2596.470 ms 00:40:53.535 [2024-11-20 11:54:58.578165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.617312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.617407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:40:53.535 [2024-11-20 11:54:58.617451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.807 ms 00:40:53.535 [2024-11-20 11:54:58.617484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.617674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.617708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:40:53.535 [2024-11-20 11:54:58.617723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:40:53.535 [2024-11-20 11:54:58.617742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.660043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.660099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:40:53.535 [2024-11-20 11:54:58.660117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.224 ms 00:40:53.535 [2024-11-20 11:54:58.660131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.660178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.660201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:40:53.535 [2024-11-20 11:54:58.660213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:40:53.535 [2024-11-20 11:54:58.660226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.661066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.661102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:40:53.535 [2024-11-20 11:54:58.661116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:40:53.535 [2024-11-20 11:54:58.661129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.661197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.661214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:40:53.535 [2024-11-20 11:54:58.661229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:40:53.535 [2024-11-20 11:54:58.661245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.682495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.682549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:40:53.535 [2024-11-20 11:54:58.682567] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.227 ms 00:40:53.535 [2024-11-20 11:54:58.682581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.695788] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:40:53.535 [2024-11-20 11:54:58.697622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.697655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:40:53.535 [2024-11-20 11:54:58.697690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.904 ms 00:40:53.535 [2024-11-20 11:54:58.697702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.730955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.730996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:40:53.535 [2024-11-20 11:54:58.731017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.216 ms 00:40:53.535 [2024-11-20 11:54:58.731029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.731139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.731162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:40:53.535 [2024-11-20 11:54:58.731180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:40:53.535 [2024-11-20 11:54:58.731191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.756452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.756499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:40:53.535 [2024-11-20 11:54:58.756535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.199 ms 00:40:53.535 [2024-11-20 11:54:58.756577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.782764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.782802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:40:53.535 [2024-11-20 11:54:58.782837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.128 ms 00:40:53.535 [2024-11-20 11:54:58.782848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.783647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.783677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:40:53.535 [2024-11-20 11:54:58.783710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.751 ms 00:40:53.535 [2024-11-20 11:54:58.783722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.535 [2024-11-20 11:54:58.864755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.535 [2024-11-20 11:54:58.864794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:40:53.535 [2024-11-20 11:54:58.864817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 80.956 ms 00:40:53.536 [2024-11-20 11:54:58.864828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.536 [2024-11-20 11:54:58.893754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:40:53.536 [2024-11-20 11:54:58.893796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:40:53.536 [2024-11-20 11:54:58.893830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.832 ms 00:40:53.536 [2024-11-20 11:54:58.893842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.536 [2024-11-20 11:54:58.919750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.536 [2024-11-20 11:54:58.919785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:40:53.536 [2024-11-20 11:54:58.919803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.857 ms 00:40:53.536 [2024-11-20 11:54:58.919813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.536 [2024-11-20 11:54:58.944847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.536 [2024-11-20 11:54:58.944886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:40:53.536 [2024-11-20 11:54:58.944905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.986 ms 00:40:53.536 [2024-11-20 11:54:58.944915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.536 [2024-11-20 11:54:58.944969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.536 [2024-11-20 11:54:58.944986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:40:53.536 [2024-11-20 11:54:58.945006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:40:53.536 [2024-11-20 11:54:58.945017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.536 [2024-11-20 11:54:58.945122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.536 [2024-11-20 11:54:58.945140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:40:53.536 [2024-11-20 11:54:58.945158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:40:53.536 [2024-11-20 11:54:58.945168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.536 [2024-11-20 11:54:58.946687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2989.401 ms, result 0 00:40:53.536 { 00:40:53.536 "name": "ftl", 00:40:53.536 "uuid": "bb4a33d2-f5a4-4ef7-b326-f904d00d0155" 00:40:53.536 } 00:40:53.536 11:54:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:40:53.536 [2024-11-20 11:54:59.261577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:53.536 11:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:40:54.102 11:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:40:54.102 [2024-11-20 11:54:59.830131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:40:54.102 11:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:40:54.360 [2024-11-20 11:55:00.067718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:54.360 11:55:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:40:54.927 Fill FTL, iteration 1 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84123 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84123 /var/tmp/spdk.tgt.sock 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84123 ']' 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.927 11:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:54.927 [2024-11-20 11:55:00.671334] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
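With 'FTL startup' reported done (result 0, uuid bb4a33d2-...), the bdev is published over NVMe/TCP before the initiator-side target above comes up. The four export RPCs gathered into one sketch, arguments exactly as traced (the redirect target for save_config is an assumption; the log shows only the call itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1    # any host, max 1 namespace
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl        # FTL bdev becomes the namespace
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Assumed destination for the saved config; the trace shows only save_config.
    $rpc save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json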
00:40:54.927 [2024-11-20 11:55:00.671555] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:40:55.184 [2024-11-20 11:55:00.851848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.443 [2024-11-20 11:55:00.974218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:56.377 11:55:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:56.377 11:55:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:40:56.377 11:55:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:40:56.634 ftln1 00:40:56.634 11:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:40:56.634 11:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84123 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84123 ']' 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84123 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84123 00:40:56.890 killing process with pid 84123 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84123' 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84123 00:40:56.890 11:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84123 00:40:58.791 11:55:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:40:58.791 11:55:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:40:58.791 [2024-11-20 11:55:04.502210] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
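A second, single-purpose target on core 1 (RPC socket /var/tmp/spdk.tgt.sock) then connects to that subsystem and has its bdev layer serialized into ini.json, which spdk_dd can replay on its own. A sketch matching the traced commands, with ini_rpc as an assumed wrapper for the initiator-side rpc.py invocation:

    ini_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock "$@"; }
    ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # exposes bdev ftln1
    {
        echo '{"subsystems": ['
        ini_rpc save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    # Once ini.json exists the helper target is no longer needed, hence the
    # killprocess 84123 traced above.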
00:40:58.791 [2024-11-20 11:55:04.502655] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84170 ] 00:40:59.050 [2024-11-20 11:55:04.686411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.308 [2024-11-20 11:55:04.821269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:00.683  [2024-11-20T11:55:07.383Z] Copying: 209/1024 [MB] (209 MBps) [2024-11-20T11:55:08.317Z] Copying: 425/1024 [MB] (216 MBps) [2024-11-20T11:55:09.693Z] Copying: 638/1024 [MB] (213 MBps) [2024-11-20T11:55:10.259Z] Copying: 852/1024 [MB] (214 MBps) [2024-11-20T11:55:11.194Z] Copying: 1024/1024 [MB] (average 212 MBps) 00:41:05.428 00:41:05.428 Calculate MD5 checksum, iteration 1 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:05.428 11:55:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:05.688 [2024-11-20 11:55:11.201389] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
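For reference, the tcp_dd readback launched above expands to a single standalone spdk_dd initiator invocation, copied from the trace; at the 212 MBps average just reported, the preceding 1024 MiB fill took roughly 1024 / 212, about 4.8 s of copy time:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0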
00:41:05.688 [2024-11-20 11:55:11.201654] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84240 ] 00:41:05.688 [2024-11-20 11:55:11.386000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.947 [2024-11-20 11:55:11.509223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.350  [2024-11-20T11:55:14.051Z] Copying: 496/1024 [MB] (496 MBps) [2024-11-20T11:55:14.051Z] Copying: 982/1024 [MB] (486 MBps) [2024-11-20T11:55:14.985Z] Copying: 1024/1024 [MB] (average 492 MBps) 00:41:09.219 00:41:09.219 11:55:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:41:09.219 11:55:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=349dee8ca5ce75ec0eaff1d1b8eac7b3 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:41:11.121 Fill FTL, iteration 2 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:11.121 11:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:41:11.379 [2024-11-20 11:55:16.926760] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
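Iteration 1's digest is banked and the offsets advance by count before the second pass, so the --seek=1024 fill above targets MiB offsets 1024..2047 of ftln1. The digest capture as traced, shown here as a standalone sketch:

  # hash the readback file and keep only the digest field
  sums[0]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -d' ' -f1)
  # sums[0] = 349dee8ca5ce75ec0eaff1d1b8eac7b3 per the trace above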
00:41:11.379 [2024-11-20 11:55:16.926952] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84302 ] 00:41:11.379 [2024-11-20 11:55:17.119102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.637 [2024-11-20 11:55:17.264435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:13.013  [2024-11-20T11:55:20.157Z] Copying: 207/1024 [MB] (207 MBps) [2024-11-20T11:55:20.724Z] Copying: 409/1024 [MB] (202 MBps) [2024-11-20T11:55:22.100Z] Copying: 613/1024 [MB] (204 MBps) [2024-11-20T11:55:23.036Z] Copying: 820/1024 [MB] (207 MBps) [2024-11-20T11:55:23.972Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:41:18.206 00:41:18.206 Calculate MD5 checksum, iteration 2 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:18.206 11:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:18.206 [2024-11-20 11:55:23.813508] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
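Rough wall-clock per 1 GiB pass, from the averages logged so far; the gap between fill and readback is consistent with writes paying FTL indirection and NV-cache overhead that reads do not (an interpretation, not something the trace states):

  # fill   (write): 1024 MB / 205 MBps ~ 5.0 s   (iteration 2, above)
  # verify (read) : 1024 MB / 492 MBps ~ 2.1 s   (iteration 1)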
00:41:18.206 [2024-11-20 11:55:23.813721] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84366 ] 00:41:18.465 [2024-11-20 11:55:23.993830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.465 [2024-11-20 11:55:24.110553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.369  [2024-11-20T11:55:27.074Z] Copying: 458/1024 [MB] (458 MBps) [2024-11-20T11:55:27.074Z] Copying: 928/1024 [MB] (470 MBps) [2024-11-20T11:55:28.447Z] Copying: 1024/1024 [MB] (average 462 MBps) 00:41:22.681 00:41:22.681 11:55:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:41:22.681 11:55:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:24.584 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:41:24.584 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d754f1c1d7e1d8e30654f548adc7f14b 00:41:24.584 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:41:24.584 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:24.584 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:41:24.843 [2024-11-20 11:55:30.449229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:24.843 [2024-11-20 11:55:30.449303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:24.843 [2024-11-20 11:55:30.449326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:41:24.843 [2024-11-20 11:55:30.449337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:24.843 [2024-11-20 11:55:30.449368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:24.843 [2024-11-20 11:55:30.449383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:24.843 [2024-11-20 11:55:30.449394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:41:24.843 [2024-11-20 11:55:30.449411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:24.843 [2024-11-20 11:55:30.449436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:24.843 [2024-11-20 11:55:30.449477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:24.843 [2024-11-20 11:55:30.449498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:24.843 [2024-11-20 11:55:30.449509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:24.843 [2024-11-20 11:55:30.449612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.365 ms, result 0 00:41:24.843 true 00:41:24.843 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:25.107 { 00:41:25.107 "name": "ftl", 00:41:25.107 "properties": [ 00:41:25.107 { 00:41:25.107 "name": "superblock_version", 00:41:25.107 "value": 5, 00:41:25.107 "read-only": true 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "name": "base_device", 00:41:25.107 "bands": [ 00:41:25.107 { 00:41:25.107 "id": 
0, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 1, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 2, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 3, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 4, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 5, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 6, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 7, 00:41:25.107 "state": "FREE", 00:41:25.107 "validity": 0.0 00:41:25.107 }, 00:41:25.107 { 00:41:25.107 "id": 8, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 9, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 10, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 11, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 12, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 13, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 14, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 15, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 16, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 17, 00:41:25.108 "state": "FREE", 00:41:25.108 "validity": 0.0 00:41:25.108 } 00:41:25.108 ], 00:41:25.108 "read-only": true 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "name": "cache_device", 00:41:25.108 "type": "bdev", 00:41:25.108 "chunks": [ 00:41:25.108 { 00:41:25.108 "id": 0, 00:41:25.108 "state": "INACTIVE", 00:41:25.108 "utilization": 0.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 1, 00:41:25.108 "state": "CLOSED", 00:41:25.108 "utilization": 1.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 2, 00:41:25.108 "state": "CLOSED", 00:41:25.108 "utilization": 1.0 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 3, 00:41:25.108 "state": "OPEN", 00:41:25.108 "utilization": 0.001953125 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "id": 4, 00:41:25.108 "state": "OPEN", 00:41:25.108 "utilization": 0.0 00:41:25.108 } 00:41:25.108 ], 00:41:25.108 "read-only": true 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "name": "verbose_mode", 00:41:25.108 "value": true, 00:41:25.108 "unit": "", 00:41:25.108 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:41:25.108 }, 00:41:25.108 { 00:41:25.108 "name": "prep_upgrade_on_shutdown", 00:41:25.108 "value": false, 00:41:25.108 "unit": "", 00:41:25.108 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:41:25.108 } 00:41:25.108 ] 00:41:25.108 } 00:41:25.108 11:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:41:25.372 [2024-11-20 11:55:30.995302] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:25.372 [2024-11-20 11:55:30.995363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:25.372 [2024-11-20 11:55:30.995382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:25.372 [2024-11-20 11:55:30.995393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:25.372 [2024-11-20 11:55:30.995422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:25.372 [2024-11-20 11:55:30.995437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:25.372 [2024-11-20 11:55:30.995448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:25.372 [2024-11-20 11:55:30.995458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:25.372 [2024-11-20 11:55:30.995481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:25.372 [2024-11-20 11:55:30.995493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:25.372 [2024-11-20 11:55:30.995503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:41:25.372 [2024-11-20 11:55:30.995513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:25.372 [2024-11-20 11:55:30.995600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.278 ms, result 0 00:41:25.372 true 00:41:25.372 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:41:25.372 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:41:25.372 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:25.675 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:41:25.675 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:41:25.675 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:41:25.999 [2024-11-20 11:55:31.579898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:25.999 [2024-11-20 11:55:31.580171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:25.999 [2024-11-20 11:55:31.580204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:41:25.999 [2024-11-20 11:55:31.580216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:25.999 [2024-11-20 11:55:31.580254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:25.999 [2024-11-20 11:55:31.580271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:25.999 [2024-11-20 11:55:31.580283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:41:25.999 [2024-11-20 11:55:31.580311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:25.999 [2024-11-20 11:55:31.580336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:25.999 [2024-11-20 11:55:31.580349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:25.999 [2024-11-20 11:55:31.580362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:25.999 [2024-11-20 
11:55:31.580372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:25.999 [2024-11-20 11:55:31.580449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.531 ms, result 0 00:41:25.999 true 00:41:25.999 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:26.258 { 00:41:26.258 "name": "ftl", 00:41:26.258 "properties": [ 00:41:26.258 { 00:41:26.258 "name": "superblock_version", 00:41:26.258 "value": 5, 00:41:26.258 "read-only": true 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "name": "base_device", 00:41:26.258 "bands": [ 00:41:26.258 { 00:41:26.258 "id": 0, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 1, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 2, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 3, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 4, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 5, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 6, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 7, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 8, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 9, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 10, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 11, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 12, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 13, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 14, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 15, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 16, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 17, 00:41:26.258 "state": "FREE", 00:41:26.258 "validity": 0.0 00:41:26.258 } 00:41:26.258 ], 00:41:26.258 "read-only": true 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "name": "cache_device", 00:41:26.258 "type": "bdev", 00:41:26.258 "chunks": [ 00:41:26.258 { 00:41:26.258 "id": 0, 00:41:26.258 "state": "INACTIVE", 00:41:26.258 "utilization": 0.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 1, 00:41:26.258 "state": "CLOSED", 00:41:26.258 "utilization": 1.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 2, 00:41:26.258 "state": "CLOSED", 00:41:26.258 "utilization": 1.0 00:41:26.258 }, 00:41:26.258 { 00:41:26.258 "id": 3, 00:41:26.259 "state": "OPEN", 00:41:26.259 "utilization": 0.001953125 00:41:26.259 }, 00:41:26.259 { 00:41:26.259 "id": 4, 00:41:26.259 "state": "OPEN", 00:41:26.259 "utilization": 0.0 00:41:26.259 } 00:41:26.259 ], 00:41:26.259 "read-only": true 00:41:26.259 
}, 00:41:26.259 { 00:41:26.259 "name": "verbose_mode", 00:41:26.259 "value": true, 00:41:26.259 "unit": "", 00:41:26.259 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:41:26.259 }, 00:41:26.259 { 00:41:26.259 "name": "prep_upgrade_on_shutdown", 00:41:26.259 "value": true, 00:41:26.259 "unit": "", 00:41:26.259 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:41:26.259 } 00:41:26.259 ] 00:41:26.259 } 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83995 ]] 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83995 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83995 ']' 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83995 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83995 00:41:26.259 killing process with pid 83995 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83995' 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83995 00:41:26.259 11:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83995 00:41:27.195 [2024-11-20 11:55:32.817197] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:41:27.195 [2024-11-20 11:55:32.834031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:27.195 [2024-11-20 11:55:32.834074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:41:27.195 [2024-11-20 11:55:32.834094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:27.195 [2024-11-20 11:55:32.834105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:27.195 [2024-11-20 11:55:32.834133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:41:27.195 [2024-11-20 11:55:32.837585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:27.195 [2024-11-20 11:55:32.837624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:41:27.195 [2024-11-20 11:55:32.837644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.432 ms 00:41:27.195 [2024-11-20 11:55:32.837654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.313 [2024-11-20 11:55:40.363578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.313 [2024-11-20 11:55:40.363645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:41:35.313 [2024-11-20 11:55:40.363667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7525.913 ms 00:41:35.313 [2024-11-20 11:55:40.363862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.313 [2024-11-20 
11:55:40.365127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.313 [2024-11-20 11:55:40.365203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:41:35.313 [2024-11-20 11:55:40.365219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.242 ms 00:41:35.313 [2024-11-20 11:55:40.365231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.313 [2024-11-20 11:55:40.366469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.313 [2024-11-20 11:55:40.366726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:41:35.313 [2024-11-20 11:55:40.366755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.179 ms 00:41:35.313 [2024-11-20 11:55:40.366768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.313 [2024-11-20 11:55:40.378437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.313 [2024-11-20 11:55:40.378671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:41:35.313 [2024-11-20 11:55:40.378699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.571 ms 00:41:35.313 [2024-11-20 11:55:40.378736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.313 [2024-11-20 11:55:40.385760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.313 [2024-11-20 11:55:40.385981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:41:35.313 [2024-11-20 11:55:40.386009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.975 ms 00:41:35.314 [2024-11-20 11:55:40.386022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.386133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.386153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:41:35.314 [2024-11-20 11:55:40.386174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:41:35.314 [2024-11-20 11:55:40.386185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.397426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.397660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:41:35.314 [2024-11-20 11:55:40.397688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.220 ms 00:41:35.314 [2024-11-20 11:55:40.397700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.408387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.408600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:41:35.314 [2024-11-20 11:55:40.408627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.598 ms 00:41:35.314 [2024-11-20 11:55:40.408639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.418810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.418846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:41:35.314 [2024-11-20 11:55:40.418861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.126 ms 00:41:35.314 [2024-11-20 11:55:40.418870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:41:35.314 [2024-11-20 11:55:40.429019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.429055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:41:35.314 [2024-11-20 11:55:40.429071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.074 ms 00:41:35.314 [2024-11-20 11:55:40.429081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.429116] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:41:35.314 [2024-11-20 11:55:40.429137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:41:35.314 [2024-11-20 11:55:40.429151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:41:35.314 [2024-11-20 11:55:40.429189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:41:35.314 [2024-11-20 11:55:40.429200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:35.314 [2024-11-20 11:55:40.429371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:41:35.314 [2024-11-20 11:55:40.429381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bb4a33d2-f5a4-4ef7-b326-f904d00d0155 00:41:35.314 [2024-11-20 11:55:40.429392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:41:35.314 [2024-11-20 
11:55:40.429403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:41:35.314 [2024-11-20 11:55:40.429412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:41:35.314 [2024-11-20 11:55:40.429423] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:41:35.314 [2024-11-20 11:55:40.429433] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:41:35.314 [2024-11-20 11:55:40.429492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:41:35.314 [2024-11-20 11:55:40.429504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:41:35.314 [2024-11-20 11:55:40.429513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:41:35.314 [2024-11-20 11:55:40.429524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:41:35.314 [2024-11-20 11:55:40.429536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.429595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:41:35.314 [2024-11-20 11:55:40.429609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:41:35.314 [2024-11-20 11:55:40.429631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.444794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.444956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:41:35.314 [2024-11-20 11:55:40.444983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.138 ms 00:41:35.314 [2024-11-20 11:55:40.445005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.445527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:35.314 [2024-11-20 11:55:40.445545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:41:35.314 [2024-11-20 11:55:40.445601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.495 ms 00:41:35.314 [2024-11-20 11:55:40.445612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.497591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.314 [2024-11-20 11:55:40.497642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:35.314 [2024-11-20 11:55:40.497667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.314 [2024-11-20 11:55:40.497680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.497724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.314 [2024-11-20 11:55:40.497755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:35.314 [2024-11-20 11:55:40.497796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.314 [2024-11-20 11:55:40.497822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.497951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.314 [2024-11-20 11:55:40.497971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:35.314 [2024-11-20 11:55:40.497982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.314 [2024-11-20 11:55:40.498008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:41:35.314 [2024-11-20 11:55:40.498042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.314 [2024-11-20 11:55:40.498055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:35.314 [2024-11-20 11:55:40.498067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.314 [2024-11-20 11:55:40.498078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.595117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.314 [2024-11-20 11:55:40.595457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:35.314 [2024-11-20 11:55:40.595647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.314 [2024-11-20 11:55:40.595727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.314 [2024-11-20 11:55:40.679389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.314 [2024-11-20 11:55:40.679703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:35.314 [2024-11-20 11:55:40.679867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.314 [2024-11-20 11:55:40.679893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.315 [2024-11-20 11:55:40.680038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:35.315 [2024-11-20 11:55:40.680066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.315 [2024-11-20 11:55:40.680078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.315 [2024-11-20 11:55:40.680226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:35.315 [2024-11-20 11:55:40.680238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.315 [2024-11-20 11:55:40.680249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.315 [2024-11-20 11:55:40.680402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:35.315 [2024-11-20 11:55:40.680415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.315 [2024-11-20 11:55:40.680427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.315 [2024-11-20 11:55:40.680500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:41:35.315 [2024-11-20 11:55:40.680513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.315 [2024-11-20 11:55:40.680524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.315 [2024-11-20 11:55:40.680614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:35.315 [2024-11-20 11:55:40.680626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.315 [2024-11-20 11:55:40.680654] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:35.315 [2024-11-20 11:55:40.680735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:35.315 [2024-11-20 11:55:40.680747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:35.315 [2024-11-20 11:55:40.680758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:35.315 [2024-11-20 11:55:40.680916] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7846.877 ms, result 0 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84590 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84590 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84590 ']' 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:38.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:38.603 11:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:38.603 [2024-11-20 11:55:43.930055] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
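The 'FTL shutdown' that just finished (7846.877 ms, dominated by the 7.5 s core-poller stop) was armed by the property sequence traced earlier. Condensed into standalone commands against the main target's default RPC socket, with the jq filter copied verbatim from the trace and error handling elided:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # arm the upgrade path on the live FTL bdev
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # count cache chunks holding data; the trace reports used=3
  used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1   # sketch; the real script's failure handling may differ
  # killing the target now runs the persist/rollback sequence traced above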
00:41:38.603 [2024-11-20 11:55:43.930209] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84590 ] 00:41:38.603 [2024-11-20 11:55:44.098773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:38.603 [2024-11-20 11:55:44.235148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.541 [2024-11-20 11:55:45.183835] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:41:39.541 [2024-11-20 11:55:45.183924] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:41:39.801 [2024-11-20 11:55:45.331759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.801 [2024-11-20 11:55:45.331801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:41:39.801 [2024-11-20 11:55:45.331824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:39.801 [2024-11-20 11:55:45.331834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.801 [2024-11-20 11:55:45.331920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.801 [2024-11-20 11:55:45.331938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:39.801 [2024-11-20 11:55:45.331951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:41:39.801 [2024-11-20 11:55:45.331965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.801 [2024-11-20 11:55:45.332004] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:41:39.801 [2024-11-20 11:55:45.332822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:41:39.802 [2024-11-20 11:55:45.332850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.332862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:39.802 [2024-11-20 11:55:45.332874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:41:39.802 [2024-11-20 11:55:45.332885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.335685] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:41:39.802 [2024-11-20 11:55:45.350867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.350907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:41:39.802 [2024-11-20 11:55:45.350931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.184 ms 00:41:39.802 [2024-11-20 11:55:45.350942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.351010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.351029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:41:39.802 [2024-11-20 11:55:45.351041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:41:39.802 [2024-11-20 11:55:45.351051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.363495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 
11:55:45.363566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:39.802 [2024-11-20 11:55:45.363600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.351 ms 00:41:39.802 [2024-11-20 11:55:45.363611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.363710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.363730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:39.802 [2024-11-20 11:55:45.363743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:41:39.802 [2024-11-20 11:55:45.363753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.363845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.363864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:41:39.802 [2024-11-20 11:55:45.363885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:41:39.802 [2024-11-20 11:55:45.363897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.363949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:41:39.802 [2024-11-20 11:55:45.368991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.369027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:39.802 [2024-11-20 11:55:45.369041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.052 ms 00:41:39.802 [2024-11-20 11:55:45.369060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.369098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.369114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:41:39.802 [2024-11-20 11:55:45.369126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:41:39.802 [2024-11-20 11:55:45.369136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.369183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:41:39.802 [2024-11-20 11:55:45.369216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:41:39.802 [2024-11-20 11:55:45.369259] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:41:39.802 [2024-11-20 11:55:45.369278] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:41:39.802 [2024-11-20 11:55:45.369372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:41:39.802 [2024-11-20 11:55:45.369387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:41:39.802 [2024-11-20 11:55:45.369400] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:41:39.802 [2024-11-20 11:55:45.369414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:41:39.802 [2024-11-20 11:55:45.369427] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:41:39.802 [2024-11-20 11:55:45.369487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:41:39.802 [2024-11-20 11:55:45.369500] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:41:39.802 [2024-11-20 11:55:45.369512] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:41:39.802 [2024-11-20 11:55:45.369522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:41:39.802 [2024-11-20 11:55:45.369535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.369585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:41:39.802 [2024-11-20 11:55:45.369603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:41:39.802 [2024-11-20 11:55:45.369614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.369704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.802 [2024-11-20 11:55:45.369720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:41:39.802 [2024-11-20 11:55:45.369732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:41:39.802 [2024-11-20 11:55:45.369764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.802 [2024-11-20 11:55:45.369872] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:41:39.802 [2024-11-20 11:55:45.369890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:41:39.802 [2024-11-20 11:55:45.369903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:39.802 [2024-11-20 11:55:45.369931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.369956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:41:39.802 [2024-11-20 11:55:45.369965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.369975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:41:39.802 [2024-11-20 11:55:45.369985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:41:39.802 [2024-11-20 11:55:45.369996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:41:39.802 [2024-11-20 11:55:45.370006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:41:39.802 [2024-11-20 11:55:45.370024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:41:39.802 [2024-11-20 11:55:45.370033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:41:39.802 [2024-11-20 11:55:45.370054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:41:39.802 [2024-11-20 11:55:45.370064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:41:39.802 [2024-11-20 11:55:45.370083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:41:39.802 [2024-11-20 11:55:45.370091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370101] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:41:39.802 [2024-11-20 11:55:45.370110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:41:39.802 [2024-11-20 11:55:45.370119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:41:39.802 [2024-11-20 11:55:45.370138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:41:39.802 [2024-11-20 11:55:45.370148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:41:39.802 [2024-11-20 11:55:45.370198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:41:39.802 [2024-11-20 11:55:45.370208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:41:39.802 [2024-11-20 11:55:45.370228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:41:39.802 [2024-11-20 11:55:45.370239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:41:39.802 [2024-11-20 11:55:45.370259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:41:39.802 [2024-11-20 11:55:45.370269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:41:39.802 [2024-11-20 11:55:45.370288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:41:39.802 [2024-11-20 11:55:45.370317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:41:39.802 [2024-11-20 11:55:45.370345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:41:39.802 [2024-11-20 11:55:45.370354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370372] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:41:39.802 [2024-11-20 11:55:45.370386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:41:39.802 [2024-11-20 11:55:45.370404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:39.802 [2024-11-20 11:55:45.370433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:41:39.802 [2024-11-20 11:55:45.370444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:41:39.802 [2024-11-20 11:55:45.370454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:41:39.802 [2024-11-20 11:55:45.370464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:41:39.802 [2024-11-20 11:55:45.370474] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:41:39.802 [2024-11-20 11:55:45.370485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:41:39.802 [2024-11-20 11:55:45.370496] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:41:39.802 [2024-11-20 11:55:45.370510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.370523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:41:39.802 [2024-11-20 11:55:45.370534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.370545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.370555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:41:39.802 [2024-11-20 11:55:45.370566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:41:39.802 [2024-11-20 11:55:45.370577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:41:39.802 [2024-11-20 11:55:45.370588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:41:39.802 [2024-11-20 11:55:45.371106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.371205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.371345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.371406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.371613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.371735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.371941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:41:39.802 [2024-11-20 11:55:45.372011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:41:39.802 [2024-11-20 11:55:45.372146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.372203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:39.802 [2024-11-20 11:55:45.372343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:41:39.803 [2024-11-20 11:55:45.372433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:41:39.803 [2024-11-20 11:55:45.372540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:41:39.803 [2024-11-20 11:55:45.372687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:39.803 [2024-11-20 11:55:45.372730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:41:39.803 [2024-11-20 11:55:45.372774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.857 ms 00:41:39.803 [2024-11-20 11:55:45.372966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:39.803 [2024-11-20 11:55:45.373054] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:41:39.803 [2024-11-20 11:55:45.373074] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:41:43.093 [2024-11-20 11:55:48.839688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.093 [2024-11-20 11:55:48.839809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:41:43.093 [2024-11-20 11:55:48.839841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3466.642 ms 00:41:43.093 [2024-11-20 11:55:48.839854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.881713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.882022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:43.353 [2024-11-20 11:55:48.882054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.571 ms 00:41:43.353 [2024-11-20 11:55:48.882068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.882236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.882264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:41:43.353 [2024-11-20 11:55:48.882279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:41:43.353 [2024-11-20 11:55:48.882291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.928052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.928105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:43.353 [2024-11-20 11:55:48.928122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.687 ms 00:41:43.353 [2024-11-20 11:55:48.928140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.928201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.928217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:43.353 [2024-11-20 11:55:48.928229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:43.353 [2024-11-20 11:55:48.928240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.929112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.929142] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:43.353 [2024-11-20 11:55:48.929156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.771 ms 00:41:43.353 [2024-11-20 11:55:48.929168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.929239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.929254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:43.353 [2024-11-20 11:55:48.929266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:41:43.353 [2024-11-20 11:55:48.929277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.951915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.951959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:43.353 [2024-11-20 11:55:48.951981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.609 ms 00:41:43.353 [2024-11-20 11:55:48.951993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.967444] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:41:43.353 [2024-11-20 11:55:48.967486] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:41:43.353 [2024-11-20 11:55:48.967505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.967518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:41:43.353 [2024-11-20 11:55:48.967852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.358 ms 00:41:43.353 [2024-11-20 11:55:48.967887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.983780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.983818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:41:43.353 [2024-11-20 11:55:48.983835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.806 ms 00:41:43.353 [2024-11-20 11:55:48.983846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:48.996811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:48.996848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:41:43.353 [2024-11-20 11:55:48.996862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.919 ms 00:41:43.353 [2024-11-20 11:55:48.996873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.009100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.009137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:41:43.353 [2024-11-20 11:55:49.009152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.182 ms 00:41:43.353 [2024-11-20 11:55:49.009162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.010026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.010066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:41:43.353 [2024-11-20 
11:55:49.010081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.747 ms 00:41:43.353 [2024-11-20 11:55:49.010091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.095609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.095704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:41:43.353 [2024-11-20 11:55:49.095742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.486 ms 00:41:43.353 [2024-11-20 11:55:49.095753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.106044] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:41:43.353 [2024-11-20 11:55:49.106999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.107033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:41:43.353 [2024-11-20 11:55:49.107049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.171 ms 00:41:43.353 [2024-11-20 11:55:49.107060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.107159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.107182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:41:43.353 [2024-11-20 11:55:49.107195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:41:43.353 [2024-11-20 11:55:49.107206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.107292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.107311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:41:43.353 [2024-11-20 11:55:49.107324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:41:43.353 [2024-11-20 11:55:49.107349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.353 [2024-11-20 11:55:49.107387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.353 [2024-11-20 11:55:49.107402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:41:43.353 [2024-11-20 11:55:49.107414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:43.354 [2024-11-20 11:55:49.107432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.354 [2024-11-20 11:55:49.107481] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:41:43.354 [2024-11-20 11:55:49.107499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.354 [2024-11-20 11:55:49.107511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:41:43.354 [2024-11-20 11:55:49.107522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:41:43.354 [2024-11-20 11:55:49.107533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.611 [2024-11-20 11:55:49.134807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.611 [2024-11-20 11:55:49.134854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:41:43.611 [2024-11-20 11:55:49.134871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.201 ms 00:41:43.611 [2024-11-20 11:55:49.134882] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.611 [2024-11-20 11:55:49.134977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.611 [2024-11-20 11:55:49.134996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:41:43.611 [2024-11-20 11:55:49.135009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:41:43.611 [2024-11-20 11:55:49.135020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.611 [2024-11-20 11:55:49.136999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3804.537 ms, result 0 00:41:43.612 [2024-11-20 11:55:49.151198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:43.612 [2024-11-20 11:55:49.167295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:41:43.612 [2024-11-20 11:55:49.176730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:43.612 11:55:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:43.612 11:55:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:41:43.612 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:43.612 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:41:43.612 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:41:43.869 [2024-11-20 11:55:49.448546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.869 [2024-11-20 11:55:49.448584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:43.869 [2024-11-20 11:55:49.448602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:41:43.869 [2024-11-20 11:55:49.448621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.869 [2024-11-20 11:55:49.448671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.869 [2024-11-20 11:55:49.448687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:43.869 [2024-11-20 11:55:49.448698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:43.869 [2024-11-20 11:55:49.448709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.869 [2024-11-20 11:55:49.448733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:43.869 [2024-11-20 11:55:49.448746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:43.869 [2024-11-20 11:55:49.448757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:43.869 [2024-11-20 11:55:49.448766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:43.869 [2024-11-20 11:55:49.448832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.302 ms, result 0 00:41:43.869 true 00:41:43.869 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:44.128 { 00:41:44.128 "name": "ftl", 00:41:44.128 "properties": [ 00:41:44.128 { 00:41:44.128 "name": "superblock_version", 00:41:44.128 "value": 5, 00:41:44.128 "read-only": true 00:41:44.128 }, 
00:41:44.128 { 00:41:44.128 "name": "base_device", 00:41:44.128 "bands": [ 00:41:44.128 { 00:41:44.128 "id": 0, 00:41:44.128 "state": "CLOSED", 00:41:44.128 "validity": 1.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 1, 00:41:44.128 "state": "CLOSED", 00:41:44.128 "validity": 1.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 2, 00:41:44.128 "state": "CLOSED", 00:41:44.128 "validity": 0.007843137254901933 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 3, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 4, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 5, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 6, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 7, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 8, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 9, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 10, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 11, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 12, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 13, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 14, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 15, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 16, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 }, 00:41:44.128 { 00:41:44.128 "id": 17, 00:41:44.128 "state": "FREE", 00:41:44.128 "validity": 0.0 00:41:44.128 } 00:41:44.129 ], 00:41:44.129 "read-only": true 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "name": "cache_device", 00:41:44.129 "type": "bdev", 00:41:44.129 "chunks": [ 00:41:44.129 { 00:41:44.129 "id": 0, 00:41:44.129 "state": "INACTIVE", 00:41:44.129 "utilization": 0.0 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "id": 1, 00:41:44.129 "state": "OPEN", 00:41:44.129 "utilization": 0.0 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "id": 2, 00:41:44.129 "state": "OPEN", 00:41:44.129 "utilization": 0.0 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "id": 3, 00:41:44.129 "state": "FREE", 00:41:44.129 "utilization": 0.0 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "id": 4, 00:41:44.129 "state": "FREE", 00:41:44.129 "utilization": 0.0 00:41:44.129 } 00:41:44.129 ], 00:41:44.129 "read-only": true 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "name": "verbose_mode", 00:41:44.129 "value": true, 00:41:44.129 "unit": "", 00:41:44.129 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:41:44.129 }, 00:41:44.129 { 00:41:44.129 "name": "prep_upgrade_on_shutdown", 00:41:44.129 "value": false, 00:41:44.129 "unit": "", 00:41:44.129 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:41:44.129 } 00:41:44.129 ] 00:41:44.129 } 00:41:44.129 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:41:44.129 11:55:49 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:44.129 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:41:44.388 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:41:44.388 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:41:44.388 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:41:44.388 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:44.388 11:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:44.649 Validate MD5 checksum, iteration 1 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:44.649 11:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:44.649 [2024-11-20 11:55:50.347372] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:41:44.649 [2024-11-20 11:55:50.347557] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84670 ] 00:41:44.908 [2024-11-20 11:55:50.516639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.908 [2024-11-20 11:55:50.645806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.814  [2024-11-20T11:55:53.517Z] Copying: 494/1024 [MB] (494 MBps) [2024-11-20T11:55:53.517Z] Copying: 968/1024 [MB] (474 MBps) [2024-11-20T11:55:55.422Z] Copying: 1024/1024 [MB] (average 484 MBps) 00:41:49.656 00:41:49.656 11:55:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:41:49.656 11:55:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:41:51.561 Validate MD5 checksum, iteration 2 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=349dee8ca5ce75ec0eaff1d1b8eac7b3 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 349dee8ca5ce75ec0eaff1d1b8eac7b3 != \3\4\9\d\e\e\8\c\a\5\c\e\7\5\e\c\0\e\a\f\f\1\d\1\b\8\e\a\c\7\b\3 ]] 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:51.561 11:55:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:51.561 [2024-11-20 11:55:57.201142] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:41:51.561 [2024-11-20 11:55:57.201361] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84743 ] 00:41:51.829 [2024-11-20 11:55:57.394900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.829 [2024-11-20 11:55:57.562450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:53.739  [2024-11-20T11:56:00.444Z] Copying: 448/1024 [MB] (448 MBps) [2024-11-20T11:56:00.703Z] Copying: 912/1024 [MB] (464 MBps) [2024-11-20T11:56:02.081Z] Copying: 1024/1024 [MB] (average 456 MBps) 00:41:56.315 00:41:56.315 11:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:41:56.315 11:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d754f1c1d7e1d8e30654f548adc7f14b 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d754f1c1d7e1d8e30654f548adc7f14b != \d\7\5\4\f\1\c\1\d\7\e\1\d\8\e\3\0\6\5\4\f\5\4\8\a\d\c\7\f\1\4\b ]] 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84590 ]] 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84590 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:58.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84810 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84810 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84810 ']' 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:58.220 11:56:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:58.220 [2024-11-20 11:56:03.879003] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:41:58.220 [2024-11-20 11:56:03.879221] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84810 ] 00:41:58.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84590 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:41:58.479 [2024-11-20 11:56:04.071881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.479 [2024-11-20 11:56:04.212795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:59.417 [2024-11-20 11:56:05.165878] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:41:59.417 [2024-11-20 11:56:05.165960] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:41:59.677 [2024-11-20 11:56:05.313880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.313925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:41:59.677 [2024-11-20 11:56:05.313956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:59.677 [2024-11-20 11:56:05.313967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.314037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.314054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:59.677 [2024-11-20 11:56:05.314066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:41:59.677 [2024-11-20 11:56:05.314076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.314113] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:41:59.677 [2024-11-20 11:56:05.314846] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:41:59.677 [2024-11-20 11:56:05.314880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.314893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:59.677 [2024-11-20 11:56:05.314904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.781 ms 00:41:59.677 [2024-11-20 11:56:05.314915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.315359] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:41:59.677 [2024-11-20 11:56:05.335152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.335207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:41:59.677 [2024-11-20 11:56:05.335229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.794 ms 00:41:59.677 [2024-11-20 11:56:05.335240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.344682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:41:59.677 [2024-11-20 11:56:05.344721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:41:59.677 [2024-11-20 11:56:05.344743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:41:59.677 [2024-11-20 11:56:05.344758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.345221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.345252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:59.677 [2024-11-20 11:56:05.345266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:41:59.677 [2024-11-20 11:56:05.345278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.345349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.345366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:59.677 [2024-11-20 11:56:05.345377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:41:59.677 [2024-11-20 11:56:05.345386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.345419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.345434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:41:59.677 [2024-11-20 11:56:05.345446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:41:59.677 [2024-11-20 11:56:05.345503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.345560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:41:59.677 [2024-11-20 11:56:05.348630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.348671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:59.677 [2024-11-20 11:56:05.348686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.102 ms 00:41:59.677 [2024-11-20 11:56:05.348701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.348736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.677 [2024-11-20 11:56:05.348750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:41:59.677 [2024-11-20 11:56:05.348761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:59.677 [2024-11-20 11:56:05.348771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.677 [2024-11-20 11:56:05.348813] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:41:59.677 [2024-11-20 11:56:05.348842] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:41:59.677 [2024-11-20 11:56:05.348878] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:41:59.677 [2024-11-20 11:56:05.348899] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:41:59.677 [2024-11-20 11:56:05.348990] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:41:59.677 [2024-11-20 11:56:05.349004] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:41:59.677 [2024-11-20 11:56:05.349018] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:41:59.677 [2024-11-20 11:56:05.349031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:41:59.677 [2024-11-20 11:56:05.349043] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:41:59.678 [2024-11-20 11:56:05.349064] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:41:59.678 [2024-11-20 11:56:05.349073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:41:59.678 [2024-11-20 11:56:05.349083] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:41:59.678 [2024-11-20 11:56:05.349098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.678 [2024-11-20 11:56:05.349109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:41:59.678 [2024-11-20 11:56:05.349120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:41:59.678 [2024-11-20 11:56:05.349130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.678 [2024-11-20 11:56:05.349204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.678 [2024-11-20 11:56:05.349218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:41:59.678 [2024-11-20 11:56:05.349228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:41:59.678 [2024-11-20 11:56:05.349238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.678 [2024-11-20 11:56:05.349327] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:41:59.678 [2024-11-20 11:56:05.349358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:41:59.678 [2024-11-20 11:56:05.349370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:41:59.678 [2024-11-20 11:56:05.349400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:41:59.678 [2024-11-20 11:56:05.349419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:41:59.678 [2024-11-20 11:56:05.349429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:41:59.678 [2024-11-20 11:56:05.349438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:41:59.678 [2024-11-20 11:56:05.349501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:41:59.678 [2024-11-20 11:56:05.349512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:41:59.678 [2024-11-20 11:56:05.349546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:41:59.678 [2024-11-20 11:56:05.349557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:41:59.678 [2024-11-20 11:56:05.349594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:41:59.678 [2024-11-20 11:56:05.349605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:41:59.678 [2024-11-20 11:56:05.349626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:41:59.678 [2024-11-20 11:56:05.349637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:41:59.678 [2024-11-20 11:56:05.349671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:41:59.678 [2024-11-20 11:56:05.349681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:41:59.678 [2024-11-20 11:56:05.349702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:41:59.678 [2024-11-20 11:56:05.349712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:41:59.678 [2024-11-20 11:56:05.349732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:41:59.678 [2024-11-20 11:56:05.349742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:41:59.678 [2024-11-20 11:56:05.349761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:41:59.678 [2024-11-20 11:56:05.349771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:41:59.678 [2024-11-20 11:56:05.349834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:41:59.678 [2024-11-20 11:56:05.349860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:41:59.678 [2024-11-20 11:56:05.349886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:41:59.678 [2024-11-20 11:56:05.349894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349903] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:41:59.678 [2024-11-20 11:56:05.349913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:41:59.678 [2024-11-20 11:56:05.349924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:59.678 [2024-11-20 11:56:05.349934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:41:59.678 [2024-11-20 11:56:05.349945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:41:59.678 [2024-11-20 11:56:05.349954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:41:59.678 [2024-11-20 11:56:05.349964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:41:59.678 [2024-11-20 11:56:05.349973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:41:59.678 [2024-11-20 11:56:05.349982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:41:59.678 [2024-11-20 11:56:05.349990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:41:59.678 [2024-11-20 11:56:05.350001] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:41:59.678 [2024-11-20 11:56:05.350015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:41:59.678 [2024-11-20 11:56:05.350037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:41:59.678 [2024-11-20 11:56:05.350067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:41:59.678 [2024-11-20 11:56:05.350076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:41:59.678 [2024-11-20 11:56:05.350086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:41:59.678 [2024-11-20 11:56:05.350095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:41:59.678 [2024-11-20 11:56:05.350162] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:41:59.678 [2024-11-20 11:56:05.350172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:59.678 [2024-11-20 11:56:05.350199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:41:59.678 [2024-11-20 11:56:05.350209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:41:59.678 [2024-11-20 11:56:05.350218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:41:59.678 [2024-11-20 11:56:05.350230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.678 [2024-11-20 11:56:05.350239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:41:59.678 [2024-11-20 11:56:05.350250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.957 ms 00:41:59.678 [2024-11-20 11:56:05.350260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.678 [2024-11-20 11:56:05.386157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.678 [2024-11-20 11:56:05.386212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:59.678 [2024-11-20 11:56:05.386237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.838 ms 00:41:59.678 [2024-11-20 11:56:05.386248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.678 [2024-11-20 11:56:05.386318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.678 [2024-11-20 11:56:05.386344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:41:59.678 [2024-11-20 11:56:05.386356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:41:59.678 [2024-11-20 11:56:05.386366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.679 [2024-11-20 11:56:05.429012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.679 [2024-11-20 11:56:05.429063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:59.679 [2024-11-20 11:56:05.429081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.559 ms 00:41:59.679 [2024-11-20 11:56:05.429095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.679 [2024-11-20 11:56:05.429164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.679 [2024-11-20 11:56:05.429182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:59.679 [2024-11-20 11:56:05.429204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:59.679 [2024-11-20 11:56:05.429221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.679 [2024-11-20 11:56:05.429399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.679 [2024-11-20 11:56:05.429418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:59.679 [2024-11-20 11:56:05.429431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:41:59.679 [2024-11-20 11:56:05.429441] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:41:59.679 [2024-11-20 11:56:05.429551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.679 [2024-11-20 11:56:05.429569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:59.679 [2024-11-20 11:56:05.429581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:41:59.679 [2024-11-20 11:56:05.429593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.938 [2024-11-20 11:56:05.452769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.938 [2024-11-20 11:56:05.452808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:59.938 [2024-11-20 11:56:05.452825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.140 ms 00:41:59.938 [2024-11-20 11:56:05.452850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.938 [2024-11-20 11:56:05.453009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.938 [2024-11-20 11:56:05.453032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:41:59.938 [2024-11-20 11:56:05.453045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:41:59.938 [2024-11-20 11:56:05.453055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.938 [2024-11-20 11:56:05.485511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.938 [2024-11-20 11:56:05.485597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:41:59.938 [2024-11-20 11:56:05.485617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.404 ms 00:41:59.938 [2024-11-20 11:56:05.485629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.938 [2024-11-20 11:56:05.495410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.938 [2024-11-20 11:56:05.495454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:41:59.938 [2024-11-20 11:56:05.495471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:41:59.938 [2024-11-20 11:56:05.495487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.938 [2024-11-20 11:56:05.570132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.938 [2024-11-20 11:56:05.570232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:41:59.938 [2024-11-20 11:56:05.570257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 74.555 ms 00:41:59.938 [2024-11-20 11:56:05.570269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.938 [2024-11-20 11:56:05.570510] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:41:59.938 [2024-11-20 11:56:05.570698] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:41:59.938 [2024-11-20 11:56:05.570918] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:41:59.939 [2024-11-20 11:56:05.571079] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:41:59.939 [2024-11-20 11:56:05.571110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.939 [2024-11-20 11:56:05.571124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:41:59.939 [2024-11-20 
11:56:05.571147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.775 ms 00:41:59.939 [2024-11-20 11:56:05.571160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.939 [2024-11-20 11:56:05.571316] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:41:59.939 [2024-11-20 11:56:05.571345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.939 [2024-11-20 11:56:05.571363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:41:59.939 [2024-11-20 11:56:05.571376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:41:59.939 [2024-11-20 11:56:05.571388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.939 [2024-11-20 11:56:05.590134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.939 [2024-11-20 11:56:05.590215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:41:59.939 [2024-11-20 11:56:05.590240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.689 ms 00:41:59.939 [2024-11-20 11:56:05.590252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.939 [2024-11-20 11:56:05.601005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.939 [2024-11-20 11:56:05.601047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:41:59.939 [2024-11-20 11:56:05.601062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:41:59.939 [2024-11-20 11:56:05.601073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:59.939 [2024-11-20 11:56:05.601181] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:41:59.939 [2024-11-20 11:56:05.601626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:59.939 [2024-11-20 11:56:05.601652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:41:59.939 [2024-11-20 11:56:05.601666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.448 ms 00:41:59.939 [2024-11-20 11:56:05.601678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:00.507 [2024-11-20 11:56:06.246072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:00.507 [2024-11-20 11:56:06.246156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:42:00.507 [2024-11-20 11:56:06.246206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 643.353 ms 00:42:00.507 [2024-11-20 11:56:06.246219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:00.507 [2024-11-20 11:56:06.251364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:00.507 [2024-11-20 11:56:06.251406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:42:00.507 [2024-11-20 11:56:06.251424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.427 ms 00:42:00.507 [2024-11-20 11:56:06.251459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:00.507 [2024-11-20 11:56:06.252036] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:42:00.507 [2024-11-20 11:56:06.252083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:00.507 [2024-11-20 11:56:06.252098] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:42:00.507 [2024-11-20 11:56:06.252111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:42:00.507 [2024-11-20 11:56:06.252123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:00.507 [2024-11-20 11:56:06.252166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:00.507 [2024-11-20 11:56:06.252185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:42:00.507 [2024-11-20 11:56:06.252198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:42:00.507 [2024-11-20 11:56:06.252218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:00.507 [2024-11-20 11:56:06.252314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 651.119 ms, result 0 00:42:00.507 [2024-11-20 11:56:06.252383] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:42:00.507 [2024-11-20 11:56:06.252708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:00.507 [2024-11-20 11:56:06.252729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:42:00.507 [2024-11-20 11:56:06.252743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:42:00.507 [2024-11-20 11:56:06.252753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.868887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.868966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:42:01.446 [2024-11-20 11:56:06.868997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.016 ms 00:42:01.446 [2024-11-20 11:56:06.869039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.874178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.874227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:42:01.446 [2024-11-20 11:56:06.874252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.475 ms 00:42:01.446 [2024-11-20 11:56:06.874278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.874796] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:42:01.446 [2024-11-20 11:56:06.874834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.874847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:42:01.446 [2024-11-20 11:56:06.874860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:42:01.446 [2024-11-20 11:56:06.874871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.874912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.874929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:42:01.446 [2024-11-20 11:56:06.874941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:01.446 [2024-11-20 11:56:06.874956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 
11:56:06.875001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 622.618 ms, result 0 00:42:01.446 [2024-11-20 11:56:06.875062] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:01.446 [2024-11-20 11:56:06.875080] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:42:01.446 [2024-11-20 11:56:06.875095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.875105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:42:01.446 [2024-11-20 11:56:06.875118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1273.945 ms 00:42:01.446 [2024-11-20 11:56:06.875128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.875165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.875190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:42:01.446 [2024-11-20 11:56:06.875201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:42:01.446 [2024-11-20 11:56:06.875211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.886903] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:42:01.446 [2024-11-20 11:56:06.887026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.887043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:42:01.446 [2024-11-20 11:56:06.887056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.794 ms 00:42:01.446 [2024-11-20 11:56:06.887067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.887787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.887823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:42:01.446 [2024-11-20 11:56:06.887838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.641 ms 00:42:01.446 [2024-11-20 11:56:06.887854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.889988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.890014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:42:01.446 [2024-11-20 11:56:06.890027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.110 ms 00:42:01.446 [2024-11-20 11:56:06.890037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.890081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.890095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:42:01.446 [2024-11-20 11:56:06.890116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:42:01.446 [2024-11-20 11:56:06.890126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.890237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.890253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:42:01.446 
[2024-11-20 11:56:06.890264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:42:01.446 [2024-11-20 11:56:06.890283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.890324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.890338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:42:01.446 [2024-11-20 11:56:06.890349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:42:01.446 [2024-11-20 11:56:06.890360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.890406] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:42:01.446 [2024-11-20 11:56:06.890422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.890433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:42:01.446 [2024-11-20 11:56:06.890445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:42:01.446 [2024-11-20 11:56:06.890455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.890523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:01.446 [2024-11-20 11:56:06.890540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:42:01.446 [2024-11-20 11:56:06.890568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:42:01.446 [2024-11-20 11:56:06.890580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:01.446 [2024-11-20 11:56:06.892117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1577.606 ms, result 0 00:42:01.446 [2024-11-20 11:56:06.907596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:01.446 [2024-11-20 11:56:06.923605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:42:01.446 [2024-11-20 11:56:06.933285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:01.446 Validate MD5 checksum, iteration 1 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:01.446 11:56:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:01.446 11:56:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:01.446 [2024-11-20 11:56:07.089750] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:42:01.446 [2024-11-20 11:56:07.089967] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84846 ] 00:42:01.706 [2024-11-20 11:56:07.285361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:01.706 [2024-11-20 11:56:07.441405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:03.613  [2024-11-20T11:56:10.326Z] Copying: 477/1024 [MB] (477 MBps) [2024-11-20T11:56:10.326Z] Copying: 946/1024 [MB] (469 MBps) [2024-11-20T11:56:12.232Z] Copying: 1024/1024 [MB] (average 468 MBps) 00:42:06.466 00:42:06.466 11:56:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:42:06.466 11:56:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:08.373 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:08.373 Validate MD5 checksum, iteration 2 00:42:08.373 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=349dee8ca5ce75ec0eaff1d1b8eac7b3 00:42:08.373 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 349dee8ca5ce75ec0eaff1d1b8eac7b3 != \3\4\9\d\e\e\8\c\a\5\c\e\7\5\e\c\0\e\a\f\f\1\d\1\b\8\e\a\c\7\b\3 ]] 00:42:08.373 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:08.374 11:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:08.374 
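(For reference: the loop driving these two iterations is visible in the upgrade_shutdown.sh xtrace above, lines @96-@105. Below is a minimal bash sketch of that logic reconstructed from the trace alone — `iterations`, `testfile`, and the `expected` array are assumed placeholders for state the trace does not show, `tcp_dd` is the helper traced through ftl/common.sh, and the `|| exit 1` failure handling is a simplification of the script's actual error path.)

    # Sketch, not the real script: validate MD5 sums of data read back from the
    # FTL bdev after the shutdown/upgrade cycle, 1 GiB (1024 x 1 MiB) at a time.
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        # Read the next 1024 x 1 MiB blocks from ftln1 over NVMe/TCP via spdk_dd.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$(( skip + 1024 ))
        # Hash what came back and compare with the sum recorded for this range
        # before the target was shut down.
        sum=$(md5sum "$testfile" | cut -f1 -d ' ')
        [[ $sum == "${expected[i]}" ]] || exit 1
    done
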
[2024-11-20 11:56:13.964068] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:42:08.374 [2024-11-20 11:56:13.965055] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84920 ] 00:42:08.633 [2024-11-20 11:56:14.157798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.633 [2024-11-20 11:56:14.315160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:10.541  [2024-11-20T11:56:17.245Z] Copying: 477/1024 [MB] (477 MBps) [2024-11-20T11:56:17.245Z] Copying: 943/1024 [MB] (466 MBps) [2024-11-20T11:56:18.622Z] Copying: 1024/1024 [MB] (average 471 MBps) 00:42:12.856 00:42:12.856 11:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:42:12.856 11:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d754f1c1d7e1d8e30654f548adc7f14b 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d754f1c1d7e1d8e30654f548adc7f14b != \d\7\5\4\f\1\c\1\d\7\e\1\d\8\e\3\0\6\5\4\f\5\4\8\a\d\c\7\f\1\4\b ]] 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:42:14.759 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84810 ]] 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84810 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84810 ']' 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84810 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84810 00:42:15.018 killing process with pid 84810 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84810' 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84810 00:42:15.018 11:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84810 00:42:15.957 [2024-11-20 11:56:21.573786] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:42:15.957 [2024-11-20 11:56:21.591120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.591177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:42:15.957 [2024-11-20 11:56:21.591210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:15.957 [2024-11-20 11:56:21.591223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.591254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:42:15.957 [2024-11-20 11:56:21.595101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.595132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:42:15.957 [2024-11-20 11:56:21.595153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.826 ms 00:42:15.957 [2024-11-20 11:56:21.595165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.595412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.595433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:42:15.957 [2024-11-20 11:56:21.595445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:42:15.957 [2024-11-20 11:56:21.595457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.596676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.596713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:42:15.957 [2024-11-20 11:56:21.596729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.197 ms 00:42:15.957 [2024-11-20 11:56:21.596747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.597889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.597951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:42:15.957 [2024-11-20 11:56:21.597965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.100 ms 00:42:15.957 [2024-11-20 11:56:21.597977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.610245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.610286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:42:15.957 [2024-11-20 11:56:21.610304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.210 ms 00:42:15.957 [2024-11-20 11:56:21.610329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.616952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.616997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:42:15.957 [2024-11-20 11:56:21.617013] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.568 ms 00:42:15.957 [2024-11-20 11:56:21.617025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.617108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.617127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:42:15.957 [2024-11-20 11:56:21.617140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:42:15.957 [2024-11-20 11:56:21.617158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.628231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.628266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:42:15.957 [2024-11-20 11:56:21.628282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.050 ms 00:42:15.957 [2024-11-20 11:56:21.628293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.639371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.639408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:42:15.957 [2024-11-20 11:56:21.639423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.037 ms 00:42:15.957 [2024-11-20 11:56:21.639433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.650563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.650608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:42:15.957 [2024-11-20 11:56:21.650624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.090 ms 00:42:15.957 [2024-11-20 11:56:21.650635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.661681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.957 [2024-11-20 11:56:21.661719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:42:15.957 [2024-11-20 11:56:21.661735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.970 ms 00:42:15.957 [2024-11-20 11:56:21.661747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.957 [2024-11-20 11:56:21.661805] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:42:15.957 [2024-11-20 11:56:21.661830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:42:15.957 [2024-11-20 11:56:21.661847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:42:15.957 [2024-11-20 11:56:21.661860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:42:15.957 [2024-11-20 11:56:21.661873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 
11:56:21.661923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.661989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.662000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.662013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.662025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.662038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.662049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:15.957 [2024-11-20 11:56:21.662062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:15.958 [2024-11-20 11:56:21.662077] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:42:15.958 [2024-11-20 11:56:21.662090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bb4a33d2-f5a4-4ef7-b326-f904d00d0155 00:42:15.958 [2024-11-20 11:56:21.662102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:42:15.958 [2024-11-20 11:56:21.662113] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:42:15.958 [2024-11-20 11:56:21.662124] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:42:15.958 [2024-11-20 11:56:21.662136] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:42:15.958 [2024-11-20 11:56:21.662148] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:42:15.958 [2024-11-20 11:56:21.662159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:42:15.958 [2024-11-20 11:56:21.662179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:42:15.958 [2024-11-20 11:56:21.662189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:42:15.958 [2024-11-20 11:56:21.662199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:42:15.958 [2024-11-20 11:56:21.662210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.958 [2024-11-20 11:56:21.662221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:42:15.958 [2024-11-20 11:56:21.662234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:42:15.958 [2024-11-20 11:56:21.662246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.958 [2024-11-20 11:56:21.679096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.958 [2024-11-20 11:56:21.679184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:42:15.958 [2024-11-20 11:56:21.679203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 16.821 ms 00:42:15.958 [2024-11-20 11:56:21.679215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:15.958 [2024-11-20 11:56:21.679753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:15.958 [2024-11-20 11:56:21.679781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:42:15.958 [2024-11-20 11:56:21.679796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:42:15.958 [2024-11-20 11:56:21.679808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.736389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.736470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:42:16.218 [2024-11-20 11:56:21.736501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.736513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.736606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.736624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:42:16.218 [2024-11-20 11:56:21.736637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.736649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.736809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.736829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:42:16.218 [2024-11-20 11:56:21.736842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.736854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.736894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.736910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:42:16.218 [2024-11-20 11:56:21.736923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.736934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.841516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.841607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:42:16.218 [2024-11-20 11:56:21.841628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.841642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.921973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.922076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:42:16.218 [2024-11-20 11:56:21.922100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.922112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.922263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.922282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:42:16.218 [2024-11-20 11:56:21.922294] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.922305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.922399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.922425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:42:16.218 [2024-11-20 11:56:21.922438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.922462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.922637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.922680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:42:16.218 [2024-11-20 11:56:21.922694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.922706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.922761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.922779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:42:16.218 [2024-11-20 11:56:21.922799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.922816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.922870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.922885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:42:16.218 [2024-11-20 11:56:21.922897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.922908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.922983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:16.218 [2024-11-20 11:56:21.923005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:42:16.218 [2024-11-20 11:56:21.923017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:16.218 [2024-11-20 11:56:21.923028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:16.218 [2024-11-20 11:56:21.923186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 332.023 ms, result 0 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:17.596 Remove shared memory files 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:42:17.596 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84590 00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:42:17.597 00:42:17.597 real 1m31.741s 00:42:17.597 user 2m5.984s 00:42:17.597 sys 0m28.892s 00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:17.597 11:56:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:17.597 ************************************ 00:42:17.597 END TEST ftl_upgrade_shutdown 00:42:17.597 ************************************ 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@14 -- # killprocess 76856 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 76856 ']' 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@958 -- # kill -0 76856 00:42:17.597 Process with pid 76856 is not found 00:42:17.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76856) - No such process 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76856 is not found' 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85049 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85049 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@835 -- # '[' -z 85049 ']' 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:17.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.597 11:56:23 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:17.597 11:56:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:17.597 [2024-11-20 11:56:23.225995] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:42:17.597 [2024-11-20 11:56:23.226191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85049 ] 00:42:17.855 [2024-11-20 11:56:23.407833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.855 [2024-11-20 11:56:23.542244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.792 11:56:24 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:18.792 11:56:24 ftl -- common/autotest_common.sh@868 -- # return 0 00:42:18.792 11:56:24 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:19.051 nvme0n1 00:42:19.051 11:56:24 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:42:19.051 11:56:24 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:19.051 11:56:24 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:19.620 11:56:25 ftl -- ftl/common.sh@28 -- # stores=69a96526-865d-45bc-8ac3-b171123225a5 00:42:19.620 11:56:25 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:42:19.620 11:56:25 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69a96526-865d-45bc-8ac3-b171123225a5 00:42:19.620 11:56:25 ftl -- ftl/ftl.sh@23 -- # killprocess 85049 00:42:19.620 11:56:25 ftl -- common/autotest_common.sh@954 -- # '[' -z 85049 ']' 00:42:19.620 11:56:25 ftl -- common/autotest_common.sh@958 -- # kill -0 85049 00:42:19.620 11:56:25 ftl -- common/autotest_common.sh@959 -- # uname 00:42:19.620 11:56:25 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:19.620 11:56:25 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85049 00:42:19.879 11:56:25 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:19.879 11:56:25 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:19.879 killing process with pid 85049 00:42:19.879 11:56:25 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85049' 00:42:19.879 11:56:25 ftl -- common/autotest_common.sh@973 -- # kill 85049 00:42:19.879 11:56:25 ftl -- common/autotest_common.sh@978 -- # wait 85049 00:42:22.443 11:56:27 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:42:22.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:22.443 Waiting for block devices as requested 00:42:22.443 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:42:22.443 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:42:22.443 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:42:22.702 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:42:27.991 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:42:27.991 Remove shared memory files 00:42:27.991 11:56:33 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:42:27.991 11:56:33 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:27.991 11:56:33 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:42:27.991 11:56:33 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:42:27.991 11:56:33 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:42:27.991 11:56:33 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:27.991 11:56:33 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:42:27.991 00:42:27.991 real 
12m25.953s 00:42:27.991 user 15m23.726s 00:42:27.991 sys 1m42.866s 00:42:27.991 11:56:33 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:27.991 11:56:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:27.991 ************************************ 00:42:27.991 END TEST ftl 00:42:27.991 ************************************ 00:42:27.991 11:56:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:27.991 11:56:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:27.991 11:56:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:27.991 11:56:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:27.991 11:56:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:27.991 11:56:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:27.991 11:56:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:27.991 11:56:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:27.991 11:56:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:27.991 11:56:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:27.991 11:56:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:27.991 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:42:27.991 11:56:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:27.991 11:56:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:27.991 11:56:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:27.991 11:56:33 -- common/autotest_common.sh@10 -- # set +x 00:42:29.897 INFO: APP EXITING 00:42:29.897 INFO: killing all VMs 00:42:29.897 INFO: killing vhost app 00:42:29.897 INFO: EXIT DONE 00:42:29.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:30.465 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:42:30.465 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:42:30.465 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:42:30.465 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:42:30.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:31.291 Cleaning 00:42:31.291 Removing: /var/run/dpdk/spdk0/config 00:42:31.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:31.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:31.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:31.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:31.291 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:31.291 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:31.292 Removing: /var/run/dpdk/spdk0 00:42:31.292 Removing: /var/run/dpdk/spdk_pid57781 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58011 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58240 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58348 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58400 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58528 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58557 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58767 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58873 00:42:31.292 Removing: /var/run/dpdk/spdk_pid58980 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59102 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59216 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59255 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59292 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59368 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59474 00:42:31.292 Removing: /var/run/dpdk/spdk_pid59945 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60020 00:42:31.292 
Removing: /var/run/dpdk/spdk_pid60096 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60118 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60272 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60294 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60442 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60458 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60534 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60553 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60617 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60635 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60830 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60867 00:42:31.292 Removing: /var/run/dpdk/spdk_pid60956 00:42:31.292 Removing: /var/run/dpdk/spdk_pid61144 00:42:31.292 Removing: /var/run/dpdk/spdk_pid61245 00:42:31.292 Removing: /var/run/dpdk/spdk_pid61287 00:42:31.292 Removing: /var/run/dpdk/spdk_pid61757 00:42:31.292 Removing: /var/run/dpdk/spdk_pid61859 00:42:31.292 Removing: /var/run/dpdk/spdk_pid61975 00:42:31.292 Removing: /var/run/dpdk/spdk_pid62028 00:42:31.292 Removing: /var/run/dpdk/spdk_pid62062 00:42:31.292 Removing: /var/run/dpdk/spdk_pid62145 00:42:31.292 Removing: /var/run/dpdk/spdk_pid62778 00:42:31.292 Removing: /var/run/dpdk/spdk_pid62820 00:42:31.292 Removing: /var/run/dpdk/spdk_pid63344 00:42:31.292 Removing: /var/run/dpdk/spdk_pid63453 00:42:31.292 Removing: /var/run/dpdk/spdk_pid63568 00:42:31.292 Removing: /var/run/dpdk/spdk_pid63621 00:42:31.292 Removing: /var/run/dpdk/spdk_pid63652 00:42:31.292 Removing: /var/run/dpdk/spdk_pid63677 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65578 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65722 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65726 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65743 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65788 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65792 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65804 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65854 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65858 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65870 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65920 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65924 00:42:31.292 Removing: /var/run/dpdk/spdk_pid65936 00:42:31.292 Removing: /var/run/dpdk/spdk_pid67341 00:42:31.292 Removing: /var/run/dpdk/spdk_pid67460 00:42:31.292 Removing: /var/run/dpdk/spdk_pid68878 00:42:31.292 Removing: /var/run/dpdk/spdk_pid70611 00:42:31.292 Removing: /var/run/dpdk/spdk_pid70692 00:42:31.292 Removing: /var/run/dpdk/spdk_pid70773 00:42:31.292 Removing: /var/run/dpdk/spdk_pid70883 00:42:31.292 Removing: /var/run/dpdk/spdk_pid70975 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71076 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71156 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71232 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71342 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71434 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71541 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71615 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71697 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71807 00:42:31.292 Removing: /var/run/dpdk/spdk_pid71903 00:42:31.292 Removing: /var/run/dpdk/spdk_pid72006 00:42:31.292 Removing: /var/run/dpdk/spdk_pid72087 00:42:31.292 Removing: /var/run/dpdk/spdk_pid72162 00:42:31.292 Removing: /var/run/dpdk/spdk_pid72272 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72364 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72465 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72545 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72624 00:42:31.551 Removing: 
/var/run/dpdk/spdk_pid72694 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72776 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72886 00:42:31.551 Removing: /var/run/dpdk/spdk_pid72979 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73075 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73155 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73233 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73313 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73386 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73495 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73593 00:42:31.551 Removing: /var/run/dpdk/spdk_pid73737 00:42:31.551 Removing: /var/run/dpdk/spdk_pid74027 00:42:31.551 Removing: /var/run/dpdk/spdk_pid74069 00:42:31.551 Removing: /var/run/dpdk/spdk_pid74550 00:42:31.551 Removing: /var/run/dpdk/spdk_pid74735 00:42:31.551 Removing: /var/run/dpdk/spdk_pid74844 00:42:31.551 Removing: /var/run/dpdk/spdk_pid74954 00:42:31.551 Removing: /var/run/dpdk/spdk_pid75009 00:42:31.551 Removing: /var/run/dpdk/spdk_pid75040 00:42:31.551 Removing: /var/run/dpdk/spdk_pid75328 00:42:31.551 Removing: /var/run/dpdk/spdk_pid75404 00:42:31.551 Removing: /var/run/dpdk/spdk_pid75491 00:42:31.551 Removing: /var/run/dpdk/spdk_pid75918 00:42:31.551 Removing: /var/run/dpdk/spdk_pid76065 00:42:31.551 Removing: /var/run/dpdk/spdk_pid76856 00:42:31.551 Removing: /var/run/dpdk/spdk_pid77000 00:42:31.551 Removing: /var/run/dpdk/spdk_pid77200 00:42:31.551 Removing: /var/run/dpdk/spdk_pid77307 00:42:31.551 Removing: /var/run/dpdk/spdk_pid77678 00:42:31.551 Removing: /var/run/dpdk/spdk_pid77962 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78313 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78523 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78663 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78723 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78877 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78908 00:42:31.551 Removing: /var/run/dpdk/spdk_pid78976 00:42:31.551 Removing: /var/run/dpdk/spdk_pid79191 00:42:31.551 Removing: /var/run/dpdk/spdk_pid79432 00:42:31.551 Removing: /var/run/dpdk/spdk_pid79890 00:42:31.551 Removing: /var/run/dpdk/spdk_pid80344 00:42:31.551 Removing: /var/run/dpdk/spdk_pid80809 00:42:31.551 Removing: /var/run/dpdk/spdk_pid81348 00:42:31.551 Removing: /var/run/dpdk/spdk_pid81509 00:42:31.551 Removing: /var/run/dpdk/spdk_pid81614 00:42:31.551 Removing: /var/run/dpdk/spdk_pid82381 00:42:31.551 Removing: /var/run/dpdk/spdk_pid82456 00:42:31.551 Removing: /var/run/dpdk/spdk_pid82945 00:42:31.551 Removing: /var/run/dpdk/spdk_pid83400 00:42:31.551 Removing: /var/run/dpdk/spdk_pid83995 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84123 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84170 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84240 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84302 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84366 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84590 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84670 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84743 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84810 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84846 00:42:31.551 Removing: /var/run/dpdk/spdk_pid84920 00:42:31.551 Removing: /var/run/dpdk/spdk_pid85049 00:42:31.551 Clean 00:42:31.810 11:56:37 -- common/autotest_common.sh@1453 -- # return 0 00:42:31.810 11:56:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:42:31.810 11:56:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:31.810 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:42:31.810 11:56:37 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:42:31.811 11:56:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:31.811 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:42:31.811 11:56:37 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:42:31.811 11:56:37 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:42:31.811 11:56:37 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:42:31.811 11:56:37 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:42:31.811 11:56:37 -- spdk/autotest.sh@398 -- # hostname 00:42:31.811 11:56:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:42:32.069 geninfo: WARNING: invalid characters removed from testname! 00:42:58.616 11:57:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:42:59.996 11:57:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:03.284 11:57:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:05.820 11:57:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:09.116 11:57:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:11.661 11:57:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:14.194 11:57:19 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:14.194 11:57:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:14.194 11:57:19 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:43:14.194 11:57:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:14.194 11:57:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:14.194 11:57:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:14.194 + [[ -n 5403 ]] 00:43:14.194 + sudo kill 5403 00:43:14.203 [Pipeline] } 00:43:14.219 [Pipeline] // timeout 00:43:14.224 [Pipeline] } 00:43:14.238 [Pipeline] // stage 00:43:14.243 [Pipeline] } 00:43:14.256 [Pipeline] // catchError 00:43:14.265 [Pipeline] stage 00:43:14.267 [Pipeline] { (Stop VM) 00:43:14.280 [Pipeline] sh 00:43:14.558 + vagrant halt 00:43:17.847 ==> default: Halting domain... 00:43:24.423 [Pipeline] sh 00:43:24.702 + vagrant destroy -f 00:43:28.020 ==> default: Removing domain... 00:43:28.600 [Pipeline] sh 00:43:28.881 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:43:28.890 [Pipeline] } 00:43:28.906 [Pipeline] // stage 00:43:28.913 [Pipeline] } 00:43:28.928 [Pipeline] // dir 00:43:28.935 [Pipeline] } 00:43:28.949 [Pipeline] // wrap 00:43:28.955 [Pipeline] } 00:43:28.967 [Pipeline] // catchError 00:43:28.977 [Pipeline] stage 00:43:28.979 [Pipeline] { (Epilogue) 00:43:28.995 [Pipeline] sh 00:43:29.281 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:35.862 [Pipeline] catchError 00:43:35.864 [Pipeline] { 00:43:35.877 [Pipeline] sh 00:43:36.157 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:36.157 Artifacts sizes are good 00:43:36.166 [Pipeline] } 00:43:36.181 [Pipeline] // catchError 00:43:36.191 [Pipeline] archiveArtifacts 00:43:36.199 Archiving artifacts 00:43:36.303 [Pipeline] cleanWs 00:43:36.313 [WS-CLEANUP] Deleting project workspace... 00:43:36.313 [WS-CLEANUP] Deferred wipeout is used... 00:43:36.320 [WS-CLEANUP] done 00:43:36.322 [Pipeline] } 00:43:36.336 [Pipeline] // stage 00:43:36.340 [Pipeline] } 00:43:36.354 [Pipeline] // node 00:43:36.359 [Pipeline] End of Pipeline 00:43:36.395 Finished: SUCCESS